11:09:18 Triggered by Gerrit: https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142900
11:09:18 Running as SYSTEM
11:09:18 [EnvInject] - Loading node environment variables.
11:09:18 Building remotely on prd-ubuntu1804-docker-8c-8g-4909 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
11:09:18 [ssh-agent] Looking for ssh-agent implementation...
11:09:18 $ ssh-agent
11:09:18 SSH_AUTH_SOCK=/tmp/ssh-4NCOuxc02oYm/agent.2058
11:09:18 SSH_AGENT_PID=2060
11:09:18 [ssh-agent] Started.
11:09:18 Running ssh-add (command line suppressed)
11:09:18 Identity added: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_13266113710284415672.key (/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_13266113710284415672.key)
11:09:18 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
11:09:18 The recommended git tool is: NONE
11:09:20 using credential onap-jenkins-ssh
11:09:20 Wiping out workspace first.
11:09:20 Cloning the remote Git repository
11:09:20 Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
11:09:20 > git init /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise # timeout=10
11:09:20 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
11:09:20 > git --version # timeout=10
11:09:20 > git --version # 'git version 2.17.1'
11:09:20 using GIT_SSH to set credentials Gerrit user
11:09:20 Verifying host key using manually-configured host key entries
11:09:20 > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git +refs/heads/*:refs/remotes/origin/* # timeout=30
11:09:21 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
11:09:21 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
11:09:21 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
11:09:21 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
11:09:21 using GIT_SSH to set credentials Gerrit user
11:09:21 Verifying host key using manually-configured host key entries
11:09:21 > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git refs/changes/00/142900/2 # timeout=30
11:09:21 > git rev-parse 3de85402dabf1b1ac2a8ab38a07118f5b7c073ce^{commit} # timeout=10
11:09:21 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
11:09:21 Checking out Revision 3de85402dabf1b1ac2a8ab38a07118f5b7c073ce (refs/changes/00/142900/2)
11:09:21 > git config core.sparsecheckout # timeout=10
11:09:21 > git checkout -f 3de85402dabf1b1ac2a8ab38a07118f5b7c073ce # timeout=30
11:09:25 Commit message: "2.2.0 release"
11:09:25 > git rev-parse FETCH_HEAD^{commit} # timeout=10
11:09:25 > git rev-list --no-walk 4071fcc15f27b4a4dd09b8ffe4a87bf25b6bdb15 # timeout=10
11:09:25 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins948191436456983511.sh
11:09:25 ---> python-tools-install.sh
11:09:25 Setup pyenv:
11:09:25 * system (set by /opt/pyenv/version)
11:09:25 * 3.8.13 (set by /opt/pyenv/version)
11:09:25 * 3.9.13 (set by /opt/pyenv/version)
11:09:25 * 3.10.6 (set by /opt/pyenv/version)
11:09:30 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-L4o3 11:09:30
lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 11:09:30 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) 11:09:30 lf-activate-venv(): INFO: Attempting to install with network-safe options... 11:09:34 lf-activate-venv(): INFO: Base packages installed successfully 11:09:34 lf-activate-venv(): INFO: Installing additional packages: lftools 11:09:59 lf-activate-venv(): INFO: Adding /tmp/venv-L4o3/bin to PATH 11:09:59 Generating Requirements File 11:10:19 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 11:10:19 httplib2 0.31.0 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible. 11:10:20 Python 3.10.6 11:10:20 pip 25.3 from /tmp/venv-L4o3/lib/python3.10/site-packages/pip (python 3.10) 11:10:20 appdirs==1.4.4 11:10:20 argcomplete==3.6.3 11:10:20 aspy.yaml==1.3.0 11:10:20 attrs==25.4.0 11:10:20 autopage==0.5.2 11:10:20 backports.strenum==1.3.1 11:10:20 beautifulsoup4==4.14.3 11:10:20 boto3==1.42.25 11:10:20 botocore==1.42.25 11:10:20 bs4==0.0.2 11:10:20 certifi==2026.1.4 11:10:20 cffi==2.0.0 11:10:20 cfgv==3.5.0 11:10:20 chardet==5.2.0 11:10:20 charset-normalizer==3.4.4 11:10:20 click==8.3.1 11:10:20 cliff==4.13.1 11:10:20 cmd2==3.1.0 11:10:20 cryptography==3.3.2 11:10:20 debtcollector==3.0.0 11:10:20 decorator==5.2.1 11:10:20 defusedxml==0.7.1 11:10:20 Deprecated==1.3.1 11:10:20 distlib==0.4.0 11:10:20 dnspython==2.8.0 11:10:20 docker==7.1.0 11:10:20 dogpile.cache==1.5.0 11:10:20 durationpy==0.10 11:10:20 email-validator==2.3.0 11:10:20 filelock==3.20.3 11:10:20 future==1.0.0 11:10:20 gitdb==4.0.12 11:10:20 GitPython==3.1.46 11:10:20 google-auth==2.47.0 11:10:20 httplib2==0.31.0 11:10:20 identify==2.6.15 11:10:20 idna==3.11 11:10:20 importlib-resources==1.5.0 11:10:20 iso8601==2.1.0 11:10:20 Jinja2==3.1.6 11:10:20 jmespath==1.0.1 11:10:20 jsonpatch==1.33 11:10:20 jsonpointer==3.0.0 11:10:20 jsonschema==4.26.0 11:10:20 jsonschema-specifications==2025.9.1 11:10:20 keystoneauth1==5.12.0 11:10:20 kubernetes==34.1.0 11:10:20 lftools==0.37.18 11:10:20 lxml==6.0.2 11:10:20 markdown-it-py==4.0.0 11:10:20 MarkupSafe==3.0.3 11:10:20 mdurl==0.1.2 11:10:20 msgpack==1.1.2 11:10:20 multi_key_dict==2.0.3 11:10:20 munch==4.0.0 11:10:20 netaddr==1.3.0 11:10:20 niet==1.4.2 11:10:20 nodeenv==1.10.0 11:10:20 oauth2client==4.1.3 11:10:20 oauthlib==3.3.1 11:10:20 openstacksdk==4.8.0 11:10:20 os-service-types==1.8.2 11:10:20 osc-lib==4.3.0 11:10:20 oslo.config==10.2.0 11:10:20 oslo.context==6.2.0 11:10:20 oslo.i18n==6.7.1 11:10:20 oslo.log==8.0.0 11:10:20 oslo.serialization==5.9.0 11:10:20 oslo.utils==9.2.0 11:10:20 packaging==25.0 11:10:20 pbr==7.0.3 11:10:20 platformdirs==4.5.1 11:10:20 prettytable==3.17.0 11:10:20 psutil==7.2.1 11:10:20 pyasn1==0.6.1 11:10:20 pyasn1_modules==0.4.2 11:10:20 pycparser==2.23 11:10:20 pygerrit2==2.0.15 11:10:20 PyGithub==2.8.1 11:10:20 Pygments==2.19.2 11:10:20 PyJWT==2.10.1 11:10:20 PyNaCl==1.6.2 11:10:20 pyparsing==2.4.7 11:10:20 pyperclip==1.11.0 11:10:20 pyrsistent==0.20.0 11:10:20 python-cinderclient==9.8.0 11:10:20 python-dateutil==2.9.0.post0 11:10:20 python-heatclient==4.3.0 11:10:20 python-jenkins==1.8.3 11:10:20 python-keystoneclient==5.7.0 11:10:20 python-magnumclient==4.9.0 11:10:20 python-openstackclient==8.3.0 11:10:20 python-swiftclient==4.9.0 11:10:20 PyYAML==6.0.3 11:10:20 referencing==0.37.0 11:10:20 requests==2.32.5 11:10:20 requests-oauthlib==2.0.0 11:10:20 
requestsexceptions==1.4.0 11:10:20 rfc3986==2.0.0 11:10:20 rich==14.2.0 11:10:20 rich-argparse==1.7.2 11:10:20 rpds-py==0.30.0 11:10:20 rsa==4.9.1 11:10:20 ruamel.yaml==0.19.1 11:10:20 ruamel.yaml.clib==0.2.15 11:10:20 s3transfer==0.16.0 11:10:20 simplejson==3.20.2 11:10:20 six==1.17.0 11:10:20 smmap==5.0.2 11:10:20 soupsieve==2.8.1 11:10:20 stevedore==5.6.0 11:10:20 tabulate==0.9.0 11:10:20 toml==0.10.2 11:10:20 tomlkit==0.13.3 11:10:20 tqdm==4.67.1 11:10:20 typing_extensions==4.15.0 11:10:20 tzdata==2025.3 11:10:20 urllib3==1.26.20 11:10:20 virtualenv==20.36.1 11:10:20 wcwidth==0.2.14 11:10:20 websocket-client==1.9.0 11:10:20 wrapt==2.0.1 11:10:20 xdg==6.0.0 11:10:20 xmltodict==1.0.2 11:10:20 yq==3.4.3 11:10:20 [EnvInject] - Injecting environment variables from a build step. 11:10:20 [EnvInject] - Injecting as environment variables the properties content 11:10:20 SET_JDK_VERSION=openjdk11 11:10:20 GIT_URL="git://cloud.onap.org/mirror" 11:10:20 11:10:20 [EnvInject] - Variables injected successfully. 11:10:20 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/sh /tmp/jenkins4984104090037238760.sh 11:10:20 ---> update-java-alternatives.sh 11:10:20 ---> Updating Java version 11:10:21 ---> Ubuntu/Debian system detected 11:10:21 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 11:10:21 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 11:10:21 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 11:10:21 openjdk version "11.0.16" 2022-07-19 11:10:21 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04) 11:10:21 OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode) 11:10:21 JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 11:10:21 [EnvInject] - Injecting environment variables from a build step. 11:10:21 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 11:10:21 [EnvInject] - Variables injected successfully. 11:10:21 provisioning config files... 11:10:21 copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config17836231249041978517tmp 11:10:21 copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config17318785289516083367tmp 11:10:21 [EnvInject] - Injecting environment variables from a build step. 
11:10:21 Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36 on prd-ubuntu1804-docker-8c-8g-4909 11:10:22 using settings config with name sdc-sdc-distribution-client-settings 11:10:22 Replacing all maven server entries not found in credentials list is true 11:10:22 using global settings config with name global-settings 11:10:22 Replacing all maven server entries not found in credentials list is true 11:10:22 [sdc-sdc-distribution-client-master-integration-pairwise] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -s /tmp/settings10930456520986990490.xml -gs /tmp/global-settings11539064383460382169.xml -DGERRIT_BRANCH=master -DGERRIT_PATCHSET_REVISION=3de85402dabf1b1ac2a8ab38a07118f5b7c073ce -DGERRIT_HOST=gerrit.onap.org -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -DGERRIT_CHANGE_OWNER_EMAIL=fiete.ostkamp@telekom.de "-DGERRIT_EVENT_ACCOUNT_NAME=Fiete Ostkamp" -DGERRIT_CHANGE_URL=https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/142900 -DGERRIT_PATCHSET_UPLOADER_EMAIL=fiete.ostkamp@telekom.de "-DARCHIVE_ARTIFACTS= **/target/surefire-reports/*-output.txt" -DGERRIT_EVENT_TYPE=patchset-created -DSTACK_NAME=$JOB_NAME-$BUILD_NUMBER -DGERRIT_PROJECT=sdc/sdc-distribution-client -DGERRIT_PATCHSET_UPLOADER_USERNAME=fostkamp -DGERRIT_CHANGE_NUMBER=142900 -DGERRIT_SCHEME=ssh '-DGERRIT_PATCHSET_UPLOADER=\"Fiete Ostkamp\" ' -DGERRIT_PORT=29418 -DGERRIT_CHANGE_PRIVATE_STATE=false -DGERRIT_REFSPEC=refs/changes/00/142900/2 "-DGERRIT_PATCHSET_UPLOADER_NAME=Fiete Ostkamp" '-DGERRIT_CHANGE_OWNER=\"Fiete Ostkamp\" ' -DPROJECT=sdc/sdc-distribution-client -DGERRIT_HASHTAGS= -DGERRIT_CHANGE_COMMIT_MESSAGE=Mi4yLjAgcmVsZWFzZQoKLSBtb3ZlIGFwaSBpbnRvIGRlZGljYXRlZCBzZGMtZGlzdHJpYnV0aW9uLWNsaWVudC1hcGkgbWF2ZW4gbW9kdWxlIFswXQotIGNoYW5nZSBwYWNrYWdlIGNvb3JkaW5hdGVzIGZvciBgRGlzdHJpYnV0aW9uU3RhdHVzRW51bWAgYW5kIGBEaXN0cmlidXRpb25BY3Rpb25SZXN1bHRFbnVtYAogIC0gYG9yZy5vbmFwLnNkYy51dGlscy5EaXN0cmlidXRpb25TdGF0dXNFbnVtYCAtPiBgb3JnLm9uYXAuc2RjLmFwaS5ub3RpZmljYXRpb24uRGlzdHJpYnV0aW9uU3RhdHVzRW51bWAKICAtIGBvcmcub25hcC5zZGMudXRpbHMuRGlzdHJpYnV0aW9uQWN0aW9uUmVzdWx0RW51bWAgLT4gYG9yZy5vbmFwLnNkYy5hcGkucmVzdWx0cy5EaXN0cmlidXRpb25BY3Rpb25SZXN1bHRFbnVtYAoKWzBdIHRoaXMgcHJlcGFyZXMgYSBzZXBhcmF0ZSBzcHJpbmcgYm9vdCBzdGFydGVyIGltcGxlbWVudGF0aW9uCiAgICB3aXRoIGEgc21hbGxlciBpbnRlcmZhY2UsIGF1dG9jb25maWd1cmF0aW9uIGFuZCByZW1vdmVkIHJ1bnRpbWUKICAgIGRlcGVuZGVuY3kgb24gc2RjLWJlIGZvciB0aGUga2Fma2EgZW5kcG9pbnQgY29uZmlndXJhdGlvbgoKSXNzdWUtSUQ6IFNEQy00Nzc3CkNoYW5nZS1JZDogSTc0YTVmNzFlMjAzODRkYWQ3MTI2YjA3M2E3YmM2OTU1ZWY1YmU0Y2QKU2lnbmVkLW9mZi1ieTogRmlldGUgT3N0a2FtcCA8ZmlldGUub3N0a2FtcEB0ZWxla29tLmRlPgo= -DGERRIT_NAME=Primary -DGERRIT_TOPIC=release-2-2-0 "-DGERRIT_CHANGE_SUBJECT=2.2.0 release" -DGERRIT_EVENT_ACCOUNT_USERNAME=fostkamp -DGERRIT_CHANGE_OWNER_USERNAME=fostkamp '-DGERRIT_EVENT_ACCOUNT=\"Fiete Ostkamp\" ' -DGERRIT_CHANGE_WIP_STATE=false -DGERRIT_CHANGE_ID=I74a5f71e20384dad7126b073a7bc6955ef5be4cd -DGERRIT_EVENT_HASH=-677050320 -DGERRIT_VERSION=3.7.2 -DGERRIT_EVENT_ACCOUNT_EMAIL=fiete.ostkamp@telekom.de -DGERRIT_PATCHSET_NUMBER=2 "-DMAVEN_PARAMS= -P integration-pairwise" "-DGERRIT_CHANGE_OWNER_NAME=Fiete Ostkamp" -DMAVEN_OPTS='' clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -P integration-pairwise 11:10:23 [INFO] Scanning for projects... 
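[Editor's note: the -DGERRIT_CHANGE_COMMIT_MESSAGE property in the Maven invocation above is passed base64-encoded. The following small Java sketch is not part of this job; it only illustrates how such a value can be decoded locally. The fallback string is just the subject line "2.2.0 release" re-encoded for illustration, not the full property value.]

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DecodeGerritCommitMessage {
    public static void main(String[] args) {
        // Pass the full value of -DGERRIT_CHANGE_COMMIT_MESSAGE as the first argument;
        // the fallback below is only the subject line "2.2.0 release" re-encoded.
        String encoded = args.length > 0 ? args[0] : "Mi4yLjAgcmVsZWFzZQo=";
        byte[] raw = Base64.getDecoder().decode(encoded);
        System.out.println(new String(raw, StandardCharsets.UTF_8));
    }
}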
11:10:23 [INFO] ------------------------------------------------------------------------ 11:10:23 [INFO] Reactor Build Order: 11:10:23 [INFO] 11:10:23 [INFO] sdc-sdc-distribution-client [pom] 11:10:23 [INFO] sdc-distribution-client-api [jar] 11:10:23 [INFO] sdc-distribution-client [jar] 11:10:23 [INFO] sdc-distribution-ci [jar] 11:10:23 [INFO] 11:10:23 [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- 11:10:23 [INFO] Building sdc-sdc-distribution-client 2.2.0-SNAPSHOT [1/4] 11:10:23 [INFO] --------------------------------[ pom ]--------------------------------- 11:10:24 [INFO] 11:10:24 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client --- 11:10:24 [INFO] 11:10:24 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client --- 11:10:26 [INFO] 11:10:26 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client --- 11:10:26 [INFO] 11:10:26 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client --- 11:10:27 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:27 [INFO] 11:10:27 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client --- 11:10:27 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:27 [INFO] 11:10:27 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client --- 11:10:29 [INFO] 11:10:29 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client --- 11:10:29 [INFO] 11:10:29 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client --- 11:10:29 [INFO] Skipping JaCoCo execution due to missing execution data file. 11:10:29 [INFO] 11:10:29 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client --- 11:10:31 [INFO] Not executing Javadoc as the project is not a Java classpath-capable package 11:10:31 [INFO] 11:10:31 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client --- 11:10:31 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:31 [INFO] 11:10:31 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client --- 11:10:31 [INFO] No tests to run. 11:10:31 [INFO] 11:10:31 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client --- 11:10:31 [INFO] Skipping JaCoCo execution due to missing execution data file. 
11:10:31 [INFO] 11:10:31 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client --- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client --- 11:10:32 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.2.0-SNAPSHOT/sdc-main-distribution-client-2.2.0-SNAPSHOT.pom 11:10:32 [INFO] 11:10:32 [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-distribution-client-api >-- 11:10:32 [INFO] Building sdc-distribution-client-api 2.2.0-SNAPSHOT [2/4] 11:10:32 [INFO] --------------------------------[ jar ]--------------------------------- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client-api --- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client-api --- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client-api --- 11:10:32 [INFO] 11:10:32 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client-api --- 11:10:32 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:32 [INFO] 11:10:32 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client-api --- 11:10:32 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:32 [INFO] 11:10:32 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client-api --- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client-api --- 11:10:32 [INFO] 11:10:32 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client-api --- 11:10:32 [INFO] Using 'UTF-8' encoding to copy filtered resources. 11:10:32 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/resources 11:10:32 [INFO] 11:10:32 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client-api --- 11:10:33 [INFO] Changes detected - recompiling the module! 11:10:33 [INFO] Compiling 23 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/classes 11:10:34 [INFO] 11:10:34 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client-api --- 11:10:34 [INFO] Using 'UTF-8' encoding to copy filtered resources. 
11:10:34 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/test/resources 11:10:34 [INFO] 11:10:34 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client-api --- 11:10:34 [INFO] No sources to compile 11:10:34 [INFO] 11:10:34 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client-api --- 11:10:34 [INFO] No tests to run. 11:10:34 [INFO] 11:10:34 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client-api --- 11:10:34 [INFO] Skipping JaCoCo execution due to missing execution data file. 11:10:34 [INFO] 11:10:34 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client-api --- 11:10:34 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar 11:10:34 [INFO] 11:10:34 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client-api --- 11:10:34 [INFO] No previous run data found, generating javadoc. 11:10:36 [INFO] 11:10:36 Loading source files for package org.onap.sdc.api.consumer... 11:10:36 Loading source files for package org.onap.sdc.api... 11:10:36 Loading source files for package org.onap.sdc.api.notification... 11:10:36 Loading source files for package org.onap.sdc.api.results... 11:10:36 Constructing Javadoc information... 11:10:36 Standard Doclet version 11.0.16 11:10:36 Building tree for all the packages and classes... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/ArtifactInfo.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/DistributionClient.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/DownloadResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/IDistributionClient.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/StatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... 
11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/DistributionStatusEnum.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/DistributionActionResultEnum.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-summary.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-tree.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-summary.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-tree.html... 
11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-summary.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-tree.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/constant-values.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/DistributionClient.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/ArtifactInfo.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/StatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/DownloadResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... 
11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/DistributionStatusEnum.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/DistributionActionResultEnum.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-use.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-use.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-use.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-use.html... 11:10:36 Building index for all the packages and classes... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/overview-tree.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/index-all.html... 11:10:36 Building index for all classes... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses-index.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allpackages-index.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/deprecated-list.html... 11:10:36 Building index for all classes... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses.html... 
11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/index.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/overview-summary.html... 11:10:36 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/help-doc.html... 11:10:36 3 warnings 11:10:36 [WARNING] Javadoc Warnings 11:10:36 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources() 11:10:36 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources() 11:10:36 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources() 11:10:36 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar 11:10:36 [INFO] 11:10:36 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client-api --- 11:10:36 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:36 [INFO] 11:10:36 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client-api --- 11:10:36 [INFO] No tests to run. 11:10:36 [INFO] 11:10:36 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client-api --- 11:10:36 [INFO] Skipping JaCoCo execution due to missing execution data file. 
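[Editor's note: the three Javadoc warnings above report an unresolved {@link} tag at IConfiguration.java:199. The fragment below is a hypothetical sketch, not the project's actual source; it assumes the referenced accessor exists on org.onap.sdc.api.notification.INotificationData and shows the usual remedy of fully qualifying (or importing) the linked type so the standard doclet can resolve it.]

package org.onap.sdc.api.consumer;

// Hypothetical fragment, not the actual IConfiguration source; it only illustrates
// resolving the "Tag @link: reference not found" warning reported above.
public interface JavadocLinkFixExample {

    /**
     * Example method-level Javadoc. Fully qualifying the linked type (or adding an
     * import for it) lets the standard doclet resolve
     * {@link org.onap.sdc.api.notification.INotificationData#getResources()}.
     * If the accessor has a different name or signature, the tag must reference the
     * actual member instead.
     */
    boolean exampleFlag();
}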
11:10:36 [INFO] 11:10:36 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client-api --- 11:10:36 [INFO] 11:10:36 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client-api --- 11:10:36 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar 11:10:36 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT.pom 11:10:36 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar 11:10:36 [INFO] 11:10:36 [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >---- 11:10:36 [INFO] Building sdc-distribution-client 2.2.0-SNAPSHOT [3/4] 11:10:36 [INFO] --------------------------------[ jar ]--------------------------------- 11:10:39 [INFO] 11:10:39 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client --- 11:10:39 [INFO] 11:10:39 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client --- 11:10:39 [INFO] 11:10:39 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client --- 11:10:39 [INFO] 11:10:39 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client --- 11:10:39 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:39 [INFO] 11:10:39 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client --- 11:10:39 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:10:39 [INFO] 11:10:39 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client --- 11:10:39 [INFO] 11:10:39 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client --- 11:10:39 [INFO] 11:10:39 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client --- 11:10:39 [INFO] Using 'UTF-8' encoding to copy filtered resources. 11:10:39 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/resources 11:10:39 [INFO] 11:10:39 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client --- 11:10:39 [INFO] Changes detected - recompiling the module! 
11:10:39 [INFO] Compiling 44 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes 11:10:40 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/http/SdcConnectorClient.java: Some input files use or override a deprecated API. 11:10:40 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/http/SdcConnectorClient.java: Recompile with -Xlint:deprecation for details. 11:10:40 [INFO] 11:10:40 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client --- 11:10:40 [INFO] Using 'UTF-8' encoding to copy filtered resources. 11:10:40 [INFO] Copying 10 resources 11:10:40 [INFO] 11:10:40 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client --- 11:10:40 [INFO] Changes detected - recompiling the module! 11:10:40 [INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes 11:10:41 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API. 11:10:41 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details. 11:10:41 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations. 11:10:41 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details. 
11:10:41 [INFO]
11:10:41 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client ---
11:10:41 [INFO]
11:10:41 [INFO] -------------------------------------------------------
11:10:41 [INFO]  T E S T S
11:10:41 [INFO] -------------------------------------------------------
11:10:42 [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest
11:10:44 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.128 s - in org.onap.sdc.http.HttpSdcClientResponseTest
11:10:44 [INFO] Running org.onap.sdc.http.HttpSdcClientTest
11:10:44 11:10:44.770 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
11:10:45 11:10:45.523 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
11:10:45 11:10:45.524 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200
11:10:45 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.507 s - in org.onap.sdc.http.HttpSdcClientTest
11:10:45 [INFO] Running org.onap.sdc.http.HttpClientFactoryTest
11:10:45 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.405 s - in org.onap.sdc.http.HttpClientFactoryTest
11:10:45 [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest
11:10:45 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 s - in org.onap.sdc.http.HttpRequestFactoryTest
11:10:45 [INFO] Running org.onap.sdc.http.SdcConnectorClientTest
11:10:46 11:10:46.347 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= cfad80a6-e851-40da-bfc6-a4f2ff1b8776 url= /sdc/v1/artifactTypes
11:10:46 11:10:46.349 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 320069288
11:10:46 11:10:46.355 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
11:10:46 11:10:46.357 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"]
11:10:46 11:10:46.359 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response
11:10:46 11:10:46.373 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 4f915785-8625-4f2c-bbfe-0b71fb30811f url= /sdc/v1/artifactTypes
11:10:46 11:10:46.377 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error:
11:10:46 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
11:10:46 at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) 11:10:46 at java.base/java.io.InputStream.read(InputStream.java:271) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) 11:10:46 at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) 11:10:46 at java.base/java.io.Reader.read(Reader.java:229) 11:10:46 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) 11:10:46 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) 11:10:46 at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) 11:10:46 at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) 11:10:46 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) 11:10:46 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) 11:10:46 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:10:46 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:10:46 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:10:46 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:10:46 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:10:46 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:10:46 at org.mockito.Answers.answer(Answers.java:99) 11:10:46 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:10:46 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:10:46 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 11:10:46 at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) 11:10:46 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:10:46 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:10:46 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:10:46 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:10:46 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:10:46 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:10:46 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:10:46 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:10:46 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:10:46 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:10:46 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:10:46 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:10:46 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
11:10:46 11:10:46.471 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response
11:10:46 11:10:46.476 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 375735e2-5062-4510-a0f2-08b3f3de0e46 url= /sdc/v1/artifactTypes
11:10:46 11:10:46.477 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 74774173
11:10:46 11:10:46.477 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
11:10:46 11:10:46.478 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work
11:10:46 11:10:46.481 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 19173a13-e914-49c7-ab7e-42f85408f111 url= /sdc/v1/distributionKafkaData
11:10:46 11:10:46.482 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1594608347
11:10:46 11:10:46.482 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
11:10:46 11:10:46.483 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work
11:10:46 11:10:46.491 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1176447324
11:10:46 11:10:46.491 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem]
11:10:46 11:10:46.492 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred:
11:10:46 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
11:10:46 at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) 11:10:46 at java.base/java.io.InputStream.read(InputStream.java:271) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) 11:10:46 at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) 11:10:46 at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) 11:10:46 at java.base/java.io.Reader.read(Reader.java:229) 11:10:46 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) 11:10:46 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) 11:10:46 at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) 11:10:46 at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) 11:10:46 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) 11:10:46 at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) 11:10:46 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:10:46 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:10:46 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:10:46 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:10:46 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:10:46 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:10:46 at org.mockito.Answers.answer(Answers.java:99) 11:10:46 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:10:46 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:10:46 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:10:46 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:10:46 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) 11:10:46 at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) 11:10:46 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:10:46 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:10:46 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:10:46 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:10:46 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:10:46 at 
org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:10:46 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:10:46 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:10:46 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:10:46 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:10:46 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:10:46 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:10:46 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:10:46 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:10:46 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:10:46 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:10:46 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:10:46 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:10:46 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:10:46 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:10:46 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:10:46 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:10:46 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:10:46 11:10:46.522 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= abe32dbb-27dc-4e2d-904b-f6d8c73f42ef url= /sdc/v1/artifactTypes 11:10:46 11:10:46.531 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 653c9e54-8747-4ee0-a1b5-adb7b57b3857 url= /sdc/v1/distributionKafkaData 11:10:46 [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.552 s - in org.onap.sdc.http.SdcConnectorClientTest 11:10:46 [INFO] Running org.onap.sdc.utils.SdcKafkaTest 11:10:46 11:10:46.553 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server 11:10:46 11:10:46.756 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:39173 11:10:46 11:10:46.756 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set 11:10:46 11:10:46.756 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set 11:10:46 11:10:46.756 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 11:10:46 11:10:46.759 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server 11:10:46 11:10:46.781 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3e3962f0 11:10:46 11:10:46.788 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit1168587096075541194 snapDir:/tmp/kafka-unit1168587096075541194 11:10:46 11:10:46.788 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _ 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | | 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | | 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_| 11:10:46 11:10:46.799 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-docker-8c-8g-4909 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server 
environment:java.vendor=Ubuntu 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/
home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modu
les/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 11:10:46 11:10:46.801 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-192-generic 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=442MB 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=8042MB 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server 
environment:os.memory.total=504MB 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000 11:10:46 11:10:46.802 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024 11:10:46 11:10:46.804 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled 11:10:46 11:10:46.805 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000 11:10:46 11:10:46.805 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000 11:10:46 11:10:46.806 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 11:10:46 11:10:46.807 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5 11:10:46 11:10:46.808 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false 11:10:46 11:10:46.811 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600 11:10:46 11:10:46.811 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1 11:10:46 11:10:46.811 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit1168587096075541194/version-2 snapdir /tmp/kafka-unit1168587096075541194/version-2 11:10:46 11:10:46.827 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 11:10:46 11:10:46.834 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 11:10:46 11:10:46.897 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in. 11:10:46 11:10:46.902 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0. 
11:10:46 11:10:46.905 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 11:10:46 11:10:46.915 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:39173 11:10:46 11:10:46.949 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 11:10:46 11:10:46.949 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 11:10:46 11:10:46.949 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 11:10:46 11:10:46.949 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 11:10:46 11:10:46.956 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 11:10:46 11:10:46.956 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit1168587096075541194/version-2/snapshot.0 11:10:46 11:10:46.961 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 11:10:46 11:10:46.961 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit1168587096075541194/version-2/snapshot.0 11:10:46 11:10:46.961 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 1 ms 11:10:46 11:10:46.977 [ProcessThread(sid:0 cport:39173):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 11:10:46 11:10:46.977 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 11:10:46 11:10:46.993 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 11:10:46 11:10:46.995 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
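A note on the SdcConnectorClientTest trace earlier in this run: the IOException marked "Not implemented. This is expected as the implementation is for unit tests only" is raised by a stream test double that the test feeds into the download-error path, so that IOUtils.toString inside SdcConnectorClient.handleSdcDownloadArtifactError fails and the "During error handling another exception occurred" branch gets exercised through a Mockito real-method call. A minimal sketch of such a test-only stream, assuming nothing about the real ThrowingInputStreamForTesting beyond what the stack trace shows:

```java
import java.io.IOException;
import java.io.InputStream;

/**
 * Illustrative test double (not the project's actual ThrowingInputStreamForTesting):
 * every read attempt fails, so any code that tries to consume the error body of a
 * failed download is forced down its secondary error-handling branch.
 */
class ThrowingInputStream extends InputStream {
    @Override
    public int read() throws IOException {
        throw new IOException(
            "Not implemented. This is expected as the implementation is for unit tests only.");
    }
}
```

In a test along these lines, the mocked HTTP response would return this stream as its entity, which is what produces the secondary IOException logged above.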
11:10:48 11:10:48.571 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: 11:10:48 advertised.listeners = SASL_PLAINTEXT://localhost:39115 11:10:48 alter.config.policy.class.name = null 11:10:48 alter.log.dirs.replication.quota.window.num = 11 11:10:48 alter.log.dirs.replication.quota.window.size.seconds = 1 11:10:48 authorizer.class.name = 11:10:48 auto.create.topics.enable = true 11:10:48 auto.leader.rebalance.enable = true 11:10:48 background.threads = 10 11:10:48 broker.heartbeat.interval.ms = 2000 11:10:48 broker.id = 1 11:10:48 broker.id.generation.enable = true 11:10:48 broker.rack = null 11:10:48 broker.session.timeout.ms = 9000 11:10:48 client.quota.callback.class = null 11:10:48 compression.type = producer 11:10:48 connection.failed.authentication.delay.ms = 100 11:10:48 connections.max.idle.ms = 600000 11:10:48 connections.max.reauth.ms = 0 11:10:48 control.plane.listener.name = null 11:10:48 controlled.shutdown.enable = true 11:10:48 controlled.shutdown.max.retries = 3 11:10:48 controlled.shutdown.retry.backoff.ms = 5000 11:10:48 controller.listener.names = null 11:10:48 controller.quorum.append.linger.ms = 25 11:10:48 controller.quorum.election.backoff.max.ms = 1000 11:10:48 controller.quorum.election.timeout.ms = 1000 11:10:48 controller.quorum.fetch.timeout.ms = 2000 11:10:48 controller.quorum.request.timeout.ms = 2000 11:10:48 controller.quorum.retry.backoff.ms = 20 11:10:48 controller.quorum.voters = [] 11:10:48 controller.quota.window.num = 11 11:10:48 controller.quota.window.size.seconds = 1 11:10:48 controller.socket.timeout.ms = 30000 11:10:48 create.topic.policy.class.name = null 11:10:48 default.replication.factor = 1 11:10:48 delegation.token.expiry.check.interval.ms = 3600000 11:10:48 delegation.token.expiry.time.ms = 86400000 11:10:48 delegation.token.master.key = null 11:10:48 delegation.token.max.lifetime.ms = 604800000 11:10:48 delegation.token.secret.key = null 11:10:48 delete.records.purgatory.purge.interval.requests = 1 11:10:48 delete.topic.enable = true 11:10:48 early.start.listeners = null 11:10:48 fetch.max.bytes = 57671680 11:10:48 fetch.purgatory.purge.interval.requests = 1000 11:10:48 group.initial.rebalance.delay.ms = 3000 11:10:48 group.max.session.timeout.ms = 1800000 11:10:48 group.max.size = 2147483647 11:10:48 group.min.session.timeout.ms = 6000 11:10:48 initial.broker.registration.timeout.ms = 60000 11:10:48 inter.broker.listener.name = null 11:10:48 inter.broker.protocol.version = 3.3-IV3 11:10:48 kafka.metrics.polling.interval.secs = 10 11:10:48 kafka.metrics.reporters = [] 11:10:48 leader.imbalance.check.interval.seconds = 300 11:10:48 leader.imbalance.per.broker.percentage = 10 11:10:48 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 11:10:48 listeners = SASL_PLAINTEXT://localhost:39115 11:10:48 log.cleaner.backoff.ms = 15000 11:10:48 log.cleaner.dedupe.buffer.size = 134217728 11:10:48 log.cleaner.delete.retention.ms = 86400000 11:10:48 log.cleaner.enable = true 11:10:48 log.cleaner.io.buffer.load.factor = 0.9 11:10:48 log.cleaner.io.buffer.size = 524288 11:10:48 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 11:10:48 log.cleaner.max.compaction.lag.ms = 9223372036854775807 11:10:48 log.cleaner.min.cleanable.ratio = 0.5 11:10:48 log.cleaner.min.compaction.lag.ms = 0 11:10:48 log.cleaner.threads = 1 11:10:48 log.cleanup.policy = [delete] 11:10:48 log.dir = /tmp/kafka-unit8944902187107510952 11:10:48 log.dirs = null 11:10:48 log.flush.interval.messages = 1 
11:10:48 log.flush.interval.ms = null 11:10:48 log.flush.offset.checkpoint.interval.ms = 60000 11:10:48 log.flush.scheduler.interval.ms = 9223372036854775807 11:10:48 log.flush.start.offset.checkpoint.interval.ms = 60000 11:10:48 log.index.interval.bytes = 4096 11:10:48 log.index.size.max.bytes = 10485760 11:10:48 log.message.downconversion.enable = true 11:10:48 log.message.format.version = 3.0-IV1 11:10:48 log.message.timestamp.difference.max.ms = 9223372036854775807 11:10:48 log.message.timestamp.type = CreateTime 11:10:48 log.preallocate = false 11:10:48 log.retention.bytes = -1 11:10:48 log.retention.check.interval.ms = 300000 11:10:48 log.retention.hours = 168 11:10:48 log.retention.minutes = null 11:10:48 log.retention.ms = null 11:10:48 log.roll.hours = 168 11:10:48 log.roll.jitter.hours = 0 11:10:48 log.roll.jitter.ms = null 11:10:48 log.roll.ms = null 11:10:48 log.segment.bytes = 1073741824 11:10:48 log.segment.delete.delay.ms = 60000 11:10:48 max.connection.creation.rate = 2147483647 11:10:48 max.connections = 2147483647 11:10:48 max.connections.per.ip = 2147483647 11:10:48 max.connections.per.ip.overrides = 11:10:48 max.incremental.fetch.session.cache.slots = 1000 11:10:48 message.max.bytes = 1048588 11:10:48 metadata.log.dir = null 11:10:48 metadata.log.max.record.bytes.between.snapshots = 20971520 11:10:48 metadata.log.segment.bytes = 1073741824 11:10:48 metadata.log.segment.min.bytes = 8388608 11:10:48 metadata.log.segment.ms = 604800000 11:10:48 metadata.max.idle.interval.ms = 500 11:10:48 metadata.max.retention.bytes = -1 11:10:48 metadata.max.retention.ms = 604800000 11:10:48 metric.reporters = [] 11:10:48 metrics.num.samples = 2 11:10:48 metrics.recording.level = INFO 11:10:48 metrics.sample.window.ms = 30000 11:10:48 min.insync.replicas = 1 11:10:48 node.id = 1 11:10:48 num.io.threads = 2 11:10:48 num.network.threads = 2 11:10:48 num.partitions = 1 11:10:48 num.recovery.threads.per.data.dir = 1 11:10:48 num.replica.alter.log.dirs.threads = null 11:10:48 num.replica.fetchers = 1 11:10:48 offset.metadata.max.bytes = 4096 11:10:48 offsets.commit.required.acks = -1 11:10:48 offsets.commit.timeout.ms = 5000 11:10:48 offsets.load.buffer.size = 5242880 11:10:48 offsets.retention.check.interval.ms = 600000 11:10:48 offsets.retention.minutes = 10080 11:10:48 offsets.topic.compression.codec = 0 11:10:48 offsets.topic.num.partitions = 50 11:10:48 offsets.topic.replication.factor = 1 11:10:48 offsets.topic.segment.bytes = 104857600 11:10:48 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 11:10:48 password.encoder.iterations = 4096 11:10:48 password.encoder.key.length = 128 11:10:48 password.encoder.keyfactory.algorithm = null 11:10:48 password.encoder.old.secret = null 11:10:48 password.encoder.secret = null 11:10:48 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 11:10:48 process.roles = [] 11:10:48 producer.purgatory.purge.interval.requests = 1000 11:10:48 queued.max.request.bytes = -1 11:10:48 queued.max.requests = 500 11:10:48 quota.window.num = 11 11:10:48 quota.window.size.seconds = 1 11:10:48 remote.log.index.file.cache.total.size.bytes = 1073741824 11:10:48 remote.log.manager.task.interval.ms = 30000 11:10:48 remote.log.manager.task.retry.backoff.max.ms = 30000 11:10:48 remote.log.manager.task.retry.backoff.ms = 500 11:10:48 remote.log.manager.task.retry.jitter = 0.2 11:10:48 remote.log.manager.thread.pool.size = 10 11:10:48 remote.log.metadata.manager.class.name = null 11:10:48 
remote.log.metadata.manager.class.path = null 11:10:48 remote.log.metadata.manager.impl.prefix = null 11:10:48 remote.log.metadata.manager.listener.name = null 11:10:48 remote.log.reader.max.pending.tasks = 100 11:10:48 remote.log.reader.threads = 10 11:10:48 remote.log.storage.manager.class.name = null 11:10:48 remote.log.storage.manager.class.path = null 11:10:48 remote.log.storage.manager.impl.prefix = null 11:10:48 remote.log.storage.system.enable = false 11:10:48 replica.fetch.backoff.ms = 1000 11:10:48 replica.fetch.max.bytes = 1048576 11:10:48 replica.fetch.min.bytes = 1 11:10:48 replica.fetch.response.max.bytes = 10485760 11:10:48 replica.fetch.wait.max.ms = 500 11:10:48 replica.high.watermark.checkpoint.interval.ms = 5000 11:10:48 replica.lag.time.max.ms = 30000 11:10:48 replica.selector.class = null 11:10:48 replica.socket.receive.buffer.bytes = 65536 11:10:48 replica.socket.timeout.ms = 30000 11:10:48 replication.quota.window.num = 11 11:10:48 replication.quota.window.size.seconds = 1 11:10:48 request.timeout.ms = 30000 11:10:48 reserved.broker.max.id = 1000 11:10:48 sasl.client.callback.handler.class = null 11:10:48 sasl.enabled.mechanisms = [PLAIN] 11:10:48 sasl.jaas.config = null 11:10:48 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:10:48 sasl.kerberos.min.time.before.relogin = 60000 11:10:48 sasl.kerberos.principal.to.local.rules = [DEFAULT] 11:10:48 sasl.kerberos.service.name = null 11:10:48 sasl.kerberos.ticket.renew.jitter = 0.05 11:10:48 sasl.kerberos.ticket.renew.window.factor = 0.8 11:10:48 sasl.login.callback.handler.class = null 11:10:48 sasl.login.class = null 11:10:48 sasl.login.connect.timeout.ms = null 11:10:48 sasl.login.read.timeout.ms = null 11:10:48 sasl.login.refresh.buffer.seconds = 300 11:10:48 sasl.login.refresh.min.period.seconds = 60 11:10:48 sasl.login.refresh.window.factor = 0.8 11:10:48 sasl.login.refresh.window.jitter = 0.05 11:10:48 sasl.login.retry.backoff.max.ms = 10000 11:10:48 sasl.login.retry.backoff.ms = 100 11:10:48 sasl.mechanism.controller.protocol = GSSAPI 11:10:48 sasl.mechanism.inter.broker.protocol = PLAIN 11:10:48 sasl.oauthbearer.clock.skew.seconds = 30 11:10:48 sasl.oauthbearer.expected.audience = null 11:10:48 sasl.oauthbearer.expected.issuer = null 11:10:48 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:10:48 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:10:48 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:10:48 sasl.oauthbearer.jwks.endpoint.url = null 11:10:48 sasl.oauthbearer.scope.claim.name = scope 11:10:48 sasl.oauthbearer.sub.claim.name = sub 11:10:48 sasl.oauthbearer.token.endpoint.url = null 11:10:48 sasl.server.callback.handler.class = null 11:10:48 sasl.server.max.receive.size = 524288 11:10:48 security.inter.broker.protocol = SASL_PLAINTEXT 11:10:48 security.providers = null 11:10:48 socket.connection.setup.timeout.max.ms = 30000 11:10:48 socket.connection.setup.timeout.ms = 10000 11:10:48 socket.listen.backlog.size = 50 11:10:48 socket.receive.buffer.bytes = 102400 11:10:48 socket.request.max.bytes = 104857600 11:10:48 socket.send.buffer.bytes = 102400 11:10:48 ssl.cipher.suites = [] 11:10:48 ssl.client.auth = none 11:10:48 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:10:48 ssl.endpoint.identification.algorithm = https 11:10:48 ssl.engine.factory.class = null 11:10:48 ssl.key.password = null 11:10:48 ssl.keymanager.algorithm = SunX509 11:10:48 ssl.keystore.certificate.chain = null 11:10:48 ssl.keystore.key = null 11:10:48 ssl.keystore.location = null 11:10:48 ssl.keystore.password = 
null 11:10:48 ssl.keystore.type = JKS 11:10:48 ssl.principal.mapping.rules = DEFAULT 11:10:48 ssl.protocol = TLSv1.3 11:10:48 ssl.provider = null 11:10:48 ssl.secure.random.implementation = null 11:10:48 ssl.trustmanager.algorithm = PKIX 11:10:48 ssl.truststore.certificates = null 11:10:48 ssl.truststore.location = null 11:10:48 ssl.truststore.password = null 11:10:48 ssl.truststore.type = JKS 11:10:48 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 11:10:48 transaction.max.timeout.ms = 900000 11:10:48 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 11:10:48 transaction.state.log.load.buffer.size = 5242880 11:10:48 transaction.state.log.min.isr = 1 11:10:48 transaction.state.log.num.partitions = 4 11:10:48 transaction.state.log.replication.factor = 1 11:10:48 transaction.state.log.segment.bytes = 104857600 11:10:48 transactional.id.expiration.ms = 604800000 11:10:48 unclean.leader.election.enable = false 11:10:48 zookeeper.clientCnxnSocket = null 11:10:48 zookeeper.connect = 127.0.0.1:39173 11:10:48 zookeeper.connection.timeout.ms = null 11:10:48 zookeeper.max.in.flight.requests = 10 11:10:48 zookeeper.session.timeout.ms = 30000 11:10:48 zookeeper.set.acl = false 11:10:48 zookeeper.ssl.cipher.suites = null 11:10:48 zookeeper.ssl.client.enable = false 11:10:48 zookeeper.ssl.crl.enable = false 11:10:48 zookeeper.ssl.enabled.protocols = null 11:10:48 zookeeper.ssl.endpoint.identification.algorithm = HTTPS 11:10:48 zookeeper.ssl.keystore.location = null 11:10:48 zookeeper.ssl.keystore.password = null 11:10:48 zookeeper.ssl.keystore.type = null 11:10:48 zookeeper.ssl.ocsp.enable = false 11:10:48 zookeeper.ssl.protocol = TLSv1.2 11:10:48 zookeeper.ssl.truststore.location = null 11:10:48 zookeeper.ssl.truststore.password = null 11:10:48 zookeeper.ssl.truststore.type = null 11:10:48 11:10:48 11:10:48.632 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 11:10:48 11:10:48.757 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 11:10:48 11:10:48.762 [main] INFO kafka.server.KafkaServer - starting 11:10:48 11:10:48.762 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:39173 11:10:48 11:10:48.762 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 11:10:48 11:10:48.783 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:39173. 
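The KafkaConfig dump above shows the embedded test broker listening on SASL_PLAINTEXT://localhost:39115 with only the PLAIN mechanism enabled. As a rough, hedged sketch (not taken from SdcKafkaTest; the credentials, group id, and topic name are placeholders), a client talking to a broker configured this way would need properties along these lines:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Address and security settings mirror the broker config logged above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real ones live in the test's JAAS setup.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "sdc-distribution-sketch");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sdc-distribution-test-topic")); // topic name is illustrative
        }
    }
}
```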
11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-docker-8c-8g-4909 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-we
bapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apa
che/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-192-generic 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 11:10:48 11:10:48.790 [main] 
INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=537MB 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=8042MB 11:10:48 11:10:48.790 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=650MB 11:10:48 11:10:48.794 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:39173 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@4f554ab3 11:10:48 11:10:48.799 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 11:10:48 11:10:48.809 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 11:10:48 11:10:48.812 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:10:48 11:10:48.813 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 11:10:48 11:10:48.817 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 11:10:48 11:10:48.818 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 11:10:48 11:10:48.820 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.Login - Client successfully logged in. 11:10:48 11:10:48.823 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 11:10:48 11:10:48.853 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:39173. 11:10:48 11:10:48.853 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 11:10:48 11:10:48.856 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:39173] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:39794 11:10:48 11:10:48.857 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:39794, server: localhost/127.0.0.1:39173 11:10:48 11:10:48.864 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:39173 11:10:48 11:10:48.875 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:39794 client's lastZxid is 0x0 11:10:48 11:10:48.878 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x10000020e9e0000 11:10:48 11:10:48.878 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x10000020e9e0000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:39794 11:10:48 11:10:48.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 11:10:48 11:10:48.882 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 11:10:48 11:10:48.888 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 11:10:48 11:10:48.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching 
for Zxid: 1, Digest in log and actual tree: 1371985504 11:10:48 11:10:48.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 11:10:48 11:10:48.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x10000020e9e0000 with negotiated timeout 30000 for client /127.0.0.1:39794 11:10:48 11:10:48.901 [main-SendThread(127.0.0.1:39173)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:39173, session id = 0x10000020e9e0000, negotiated timeout = 30000 11:10:48 11:10:48.906 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 11:10:48 11:10:48.907 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 11:10:48 11:10:48.908 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 11:10:48 11:10:48.911 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 11:10:48 11:10:48.912 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 11:10:48 11:10:48.913 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 11:10:48 11:10:48.919 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 11:10:48 11:10:48.922 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 11:10:48 11:10:48.923 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 11:10:48 11:10:48.923 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 11:10:48 11:10:48.924 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 11:10:48 11:10:48.924 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 11:10:48 11:10:48.960 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 11:10:48 11:10:48.966 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 11:10:48 11:10:48.967 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 11:10:48 11:10:48.967 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 11:10:48 11:10:48.970 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
11:10:48 11:10:48.972 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 11:10:48 11:10:48.973 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 11:10:48 11:10:48.975 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 11:10:48 11:10:48.978 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:48 11:10:48.978 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:48 11:10:48 11:10:48.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:48 11:10:48.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:48 ] 11:10:48 11:10:48.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:48 , 'ip,'127.0.0.1 11:10:48 ] 11:10:48 11:10:48.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 11:10:48 11:10:48.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 11:10:48 11:10:48.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 11:10:48 11:10:48.993 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 11:10:48 11:10:48.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 3158685775 11:10:48 11:10:48.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 11:10:48 11:10:48.996 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 11:10:49 11:10:49.017 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.017 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.019 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 11:10:49 11:10:49.020 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:49 11:10:49.021 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids 
finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 11:10:49 11:10:49.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.025 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.026 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3158685775 11:10:49 11:10:49.026 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 3276647001 11:10:49 11:10:49.027 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 11:10:49 11:10:49.028 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:49 11:10:49.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 5663647451 11:10:49 11:10:49.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 11:10:49 11:10:49.029 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 11:10:49 11:10:49.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 5663647451 11:10:49 11:10:49.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 6818765937 11:10:49 11:10:49.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 11:10:49 11:10:49.042 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:49 
11:10:49.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 8461026917 11:10:49 11:10:49.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 11:10:49 11:10:49.044 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 11:10:49 11:10:49.046 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.046 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.046 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.046 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.046 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.047 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8461026917 11:10:49 11:10:49.047 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 5800441347 11:10:49 11:10:49.048 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 11:10:49 11:10:49.048 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:49 11:10:49.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 6613143344 11:10:49 11:10:49.049 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 11:10:49 11:10:49.050 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 11:10:49 11:10:49.052 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.052 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.054 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 11:10:49 11:10:49.054 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:49 11:10:49.055 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/changes 
serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.058 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 6613143344 11:10:49 11:10:49.059 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 8146218220 11:10:49 11:10:49.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 11:10:49 11:10:49.060 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 11979281272 11:10:49 11:10:49.060 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 11:10:49 11:10:49.061 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 11:10:49 11:10:49.062 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.062 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.063 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.063 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.063 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.063 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 11979281272 11:10:49 11:10:49.063 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 11251127554 11:10:49 11:10:49.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 11:10:49 11:10:49.064 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 13428304527 11:10:49 11:10:49.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 11:10:49 11:10:49.067 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 11:10:49 11:10:49.068 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.068 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 11:10:49 11:10:49.070 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:49 11:10:49.071 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 11:10:49 11:10:49.072 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 13428304527 11:10:49 11:10:49.073 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 12839932759 11:10:49 11:10:49.074 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 11:10:49 11:10:49.075 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 11:10:49 11:10:49.075 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 16581645927 11:10:49 11:10:49.075 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 11:10:49 11:10:49.075 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x10000020e9e0000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 11:10:49 11:10:49.078 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 16581645927 11:10:49 11:10:49.079 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13756565153 11:10:49 11:10:49.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 11:10:49 11:10:49.081 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 11:10:49 11:10:49.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 15655073013 11:10:49 11:10:49.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 11:10:49 11:10:49.082 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 11:10:49 11:10:49.083 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 15655073013 11:10:49 11:10:49.084 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 16650124652 11:10:49 11:10:49.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xe zxid:0xd 
txntype:1 reqpath:n/a 11:10:49 11:10:49.086 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:49 11:10:49.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 18361177384 11:10:49 11:10:49.086 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 11:10:49 11:10:49.087 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 11:10:49 11:10:49.088 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.088 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.089 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.089 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.089 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.089 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18361177384 11:10:49 11:10:49.089 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 19148492109 11:10:49 11:10:49.090 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 11:10:49 11:10:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 11:10:49 11:10:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 21621727267 11:10:49 11:10:49.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 11:10:49 11:10:49.091 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 11:10:49 11:10:49.093 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.093 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.093 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.093 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.093 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.093 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 21621727267 11:10:49 11:10:49.094 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 21001603314 11:10:49 11:10:49.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 11:10:49 11:10:49.095 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 11:10:49 11:10:49.095 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 22236057915 11:10:49 11:10:49.096 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 11:10:49 11:10:49.096 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 11:10:49 11:10:49.098 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.098 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.098 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.098 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.098 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.099 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22236057915 11:10:49 11:10:49.099 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 21413597876 11:10:49 11:10:49.100 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 11:10:49 11:10:49.100 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 11:10:49 11:10:49.100 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 21435932657 11:10:49 11:10:49.100 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 11:10:49 11:10:49.101 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: 
'/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 11:10:49 11:10:49.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.103 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 21435932657 11:10:49 11:10:49.103 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 21455891702 11:10:49 11:10:49.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 11:10:49 11:10:49.105 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 24204821361 11:10:49 11:10:49.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 11:10:49 11:10:49.106 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.107 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24204821361 11:10:49 11:10:49.108 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22115312212 11:10:49 11:10:49.110 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 11:10:49 11:10:49.111 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.111 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 22175757745 11:10:49 11:10:49.111 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 11:10:49 11:10:49.112 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.114 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22175757745 11:10:49 11:10:49.115 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24365114476 11:10:49 11:10:49.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 11:10:49 11:10:49.116 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 25601662782 11:10:49 11:10:49.116 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 11:10:49 11:10:49.117 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 
'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 25601662782 11:10:49 11:10:49.119 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 23074377576 11:10:49 11:10:49.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 11:10:49 11:10:49.120 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 24064559339 11:10:49 11:10:49.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 11:10:49 11:10:49.121 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 11:10:49 11:10:49.122 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.122 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.122 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.122 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.122 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.123 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24064559339 11:10:49 11:10:49.123 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 23713914682 11:10:49 11:10:49.125 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 11:10:49 11:10:49.125 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:49 11:10:49.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 26019552192 11:10:49 11:10:49.126 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 11:10:49 11:10:49.126 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 11:10:49 11:10:49.141 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.141 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:10:49 11:10:49.143 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 11:10:49 11:10:49.144 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 11:10:49 11:10:49.482 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.482 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 11:10:49 11:10:49.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:49 11:10:49.487 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a226a7835796370395054484f586f315536483851546d77227d,v{s{31,s{'world,'anyone}}},0 response:: 11:10:49 11:10:49.490 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.490 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.490 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.490 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.490 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.491 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 26019552192 11:10:49 11:10:49.491 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 25696696026 11:10:49 11:10:49.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 11:10:49 11:10:49.492 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 11:10:49 11:10:49.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 29210447533 11:10:49 11:10:49.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 11:10:49 11:10:49.493 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 11:10:49 11:10:49.495 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.495 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:49 11:10:49 11:10:49.495 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:49 11:10:49.496 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.496 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.497 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 29210447533 11:10:49 11:10:49.497 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 29529080273 11:10:49 11:10:49.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 11:10:49 11:10:49.498 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 11:10:49 11:10:49.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 33146094928 11:10:49 11:10:49.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 11:10:49 11:10:49.499 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a226a7835796370395054484f586f315536483851546d77227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 11:10:49 11:10:49.500 [main] INFO kafka.server.KafkaServer - Cluster ID = jx5ycp9PTHOXo1U6H8QTmw 11:10:49 11:10:49.504 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit8944902187107510952/meta.properties 11:10:49 11:10:49.517 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 11:10:49 11:10:49.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 11:10:49 11:10:49.518 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 
11:10:49 11:10:49.571 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000
11:10:49 11:10:49.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1
11:10:49 11:10:49.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1
11:10:49 11:10:49.573 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response::
11:10:49 11:10:49.575 [main] INFO kafka.server.KafkaConfig - KafkaConfig values:
11:10:49 advertised.listeners = SASL_PLAINTEXT://localhost:39115
11:10:49 alter.config.policy.class.name = null
11:10:49 alter.log.dirs.replication.quota.window.num = 11
11:10:49 alter.log.dirs.replication.quota.window.size.seconds = 1
11:10:49 authorizer.class.name =
11:10:49 auto.create.topics.enable = true
11:10:49 auto.leader.rebalance.enable = true
11:10:49 background.threads = 10
11:10:49 broker.heartbeat.interval.ms = 2000
11:10:49 broker.id = 1
11:10:49 broker.id.generation.enable = true
11:10:49 broker.rack = null
11:10:49 broker.session.timeout.ms = 9000
11:10:49 client.quota.callback.class = null
11:10:49 compression.type = producer
11:10:49 connection.failed.authentication.delay.ms = 100
11:10:49 connections.max.idle.ms = 600000
11:10:49 connections.max.reauth.ms = 0
11:10:49 control.plane.listener.name = null
11:10:49 controlled.shutdown.enable = true
11:10:49 controlled.shutdown.max.retries = 3
11:10:49 controlled.shutdown.retry.backoff.ms = 5000
11:10:49 controller.listener.names = null
11:10:49 controller.quorum.append.linger.ms = 25
11:10:49 controller.quorum.election.backoff.max.ms = 1000
11:10:49 controller.quorum.election.timeout.ms = 1000
11:10:49 controller.quorum.fetch.timeout.ms = 2000
11:10:49 controller.quorum.request.timeout.ms = 2000
11:10:49 controller.quorum.retry.backoff.ms = 20
11:10:49 controller.quorum.voters = []
11:10:49 controller.quota.window.num = 11
11:10:49 controller.quota.window.size.seconds = 1
11:10:49 controller.socket.timeout.ms = 30000
11:10:49 create.topic.policy.class.name = null
11:10:49 default.replication.factor = 1
11:10:49 delegation.token.expiry.check.interval.ms = 3600000
11:10:49 delegation.token.expiry.time.ms = 86400000
11:10:49 delegation.token.master.key = null
11:10:49 delegation.token.max.lifetime.ms = 604800000
11:10:49 delegation.token.secret.key = null
11:10:49 delete.records.purgatory.purge.interval.requests = 1
11:10:49 delete.topic.enable = true
11:10:49 early.start.listeners = null
11:10:49 fetch.max.bytes = 57671680
11:10:49 fetch.purgatory.purge.interval.requests = 1000
11:10:49 group.initial.rebalance.delay.ms = 3000
11:10:49 group.max.session.timeout.ms = 1800000
11:10:49 group.max.size = 2147483647
11:10:49 group.min.session.timeout.ms = 6000
11:10:49 initial.broker.registration.timeout.ms = 60000
11:10:49 inter.broker.listener.name = null
11:10:49 inter.broker.protocol.version = 3.3-IV3
11:10:49 kafka.metrics.polling.interval.secs = 10
11:10:49 kafka.metrics.reporters = []
11:10:49 leader.imbalance.check.interval.seconds = 300
11:10:49 leader.imbalance.per.broker.percentage = 10
11:10:49 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
11:10:49 listeners = SASL_PLAINTEXT://localhost:39115
11:10:49 log.cleaner.backoff.ms = 15000
11:10:49 log.cleaner.dedupe.buffer.size = 134217728
11:10:49 log.cleaner.delete.retention.ms = 86400000
11:10:49 log.cleaner.enable = true
11:10:49 log.cleaner.io.buffer.load.factor = 0.9
11:10:49 log.cleaner.io.buffer.size = 524288
11:10:49 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
11:10:49 log.cleaner.max.compaction.lag.ms = 9223372036854775807
11:10:49 log.cleaner.min.cleanable.ratio = 0.5
11:10:49 log.cleaner.min.compaction.lag.ms = 0
11:10:49 log.cleaner.threads = 1
11:10:49 log.cleanup.policy = [delete]
11:10:49 log.dir = /tmp/kafka-unit8944902187107510952
11:10:49 log.dirs = null
11:10:49 log.flush.interval.messages = 1
11:10:49 log.flush.interval.ms = null
11:10:49 log.flush.offset.checkpoint.interval.ms = 60000
11:10:49 log.flush.scheduler.interval.ms = 9223372036854775807
11:10:49 log.flush.start.offset.checkpoint.interval.ms = 60000
11:10:49 log.index.interval.bytes = 4096
11:10:49 log.index.size.max.bytes = 10485760
11:10:49 log.message.downconversion.enable = true
11:10:49 log.message.format.version = 3.0-IV1
11:10:49 log.message.timestamp.difference.max.ms = 9223372036854775807
11:10:49 log.message.timestamp.type = CreateTime
11:10:49 log.preallocate = false
11:10:49 log.retention.bytes = -1
11:10:49 log.retention.check.interval.ms = 300000
11:10:49 log.retention.hours = 168
11:10:49 log.retention.minutes = null
11:10:49 log.retention.ms = null
11:10:49 log.roll.hours = 168
11:10:49 log.roll.jitter.hours = 0
11:10:49 log.roll.jitter.ms = null
11:10:49 log.roll.ms = null
11:10:49 log.segment.bytes = 1073741824
11:10:49 log.segment.delete.delay.ms = 60000
11:10:49 max.connection.creation.rate = 2147483647
11:10:49 max.connections = 2147483647
11:10:49 max.connections.per.ip = 2147483647
11:10:49 max.connections.per.ip.overrides =
11:10:49 max.incremental.fetch.session.cache.slots = 1000
11:10:49 message.max.bytes = 1048588
11:10:49 metadata.log.dir = null
11:10:49 metadata.log.max.record.bytes.between.snapshots = 20971520
11:10:49 metadata.log.segment.bytes = 1073741824
11:10:49 metadata.log.segment.min.bytes = 8388608
11:10:49 metadata.log.segment.ms = 604800000
11:10:49 metadata.max.idle.interval.ms = 500
11:10:49 metadata.max.retention.bytes = -1
11:10:49 metadata.max.retention.ms = 604800000
11:10:49 metric.reporters = []
11:10:49 metrics.num.samples = 2
11:10:49 metrics.recording.level = INFO
11:10:49 metrics.sample.window.ms = 30000
11:10:49 min.insync.replicas = 1
11:10:49 node.id = 1
11:10:49 num.io.threads = 2
11:10:49 num.network.threads = 2
11:10:49 num.partitions = 1
11:10:49 num.recovery.threads.per.data.dir = 1
11:10:49 num.replica.alter.log.dirs.threads = null
11:10:49 num.replica.fetchers = 1
11:10:49 offset.metadata.max.bytes = 4096
11:10:49 offsets.commit.required.acks = -1
11:10:49 offsets.commit.timeout.ms = 5000
11:10:49 offsets.load.buffer.size = 5242880
11:10:49 offsets.retention.check.interval.ms = 600000
11:10:49 offsets.retention.minutes = 10080
11:10:49 offsets.topic.compression.codec = 0
11:10:49 offsets.topic.num.partitions = 50
11:10:49 offsets.topic.replication.factor = 1
11:10:49 offsets.topic.segment.bytes = 104857600
11:10:49 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
11:10:49 password.encoder.iterations = 4096
11:10:49 password.encoder.key.length = 128
11:10:49 password.encoder.keyfactory.algorithm = null
11:10:49 password.encoder.old.secret = null
11:10:49 password.encoder.secret = null
11:10:49 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
11:10:49 process.roles = []
11:10:49 producer.purgatory.purge.interval.requests = 1000
11:10:49 queued.max.request.bytes = -1
11:10:49 queued.max.requests = 500
11:10:49 quota.window.num = 11
11:10:49 quota.window.size.seconds = 1
11:10:49 remote.log.index.file.cache.total.size.bytes = 1073741824
11:10:49 remote.log.manager.task.interval.ms = 30000
11:10:49 remote.log.manager.task.retry.backoff.max.ms = 30000
11:10:49 remote.log.manager.task.retry.backoff.ms = 500
11:10:49 remote.log.manager.task.retry.jitter = 0.2
11:10:49 remote.log.manager.thread.pool.size = 10
11:10:49 remote.log.metadata.manager.class.name = null
11:10:49 remote.log.metadata.manager.class.path = null
11:10:49 remote.log.metadata.manager.impl.prefix = null
11:10:49 remote.log.metadata.manager.listener.name = null
11:10:49 remote.log.reader.max.pending.tasks = 100
11:10:49 remote.log.reader.threads = 10
11:10:49 remote.log.storage.manager.class.name = null
11:10:49 remote.log.storage.manager.class.path = null
11:10:49 remote.log.storage.manager.impl.prefix = null
11:10:49 remote.log.storage.system.enable = false
11:10:49 replica.fetch.backoff.ms = 1000
11:10:49 replica.fetch.max.bytes = 1048576
11:10:49 replica.fetch.min.bytes = 1
11:10:49 replica.fetch.response.max.bytes = 10485760
11:10:49 replica.fetch.wait.max.ms = 500
11:10:49 replica.high.watermark.checkpoint.interval.ms = 5000
11:10:49 replica.lag.time.max.ms = 30000
11:10:49 replica.selector.class = null
11:10:49 replica.socket.receive.buffer.bytes = 65536
11:10:49 replica.socket.timeout.ms = 30000
11:10:49 replication.quota.window.num = 11
11:10:49 replication.quota.window.size.seconds = 1
11:10:49 request.timeout.ms = 30000
11:10:49 reserved.broker.max.id = 1000
11:10:49 sasl.client.callback.handler.class = null
11:10:49 sasl.enabled.mechanisms = [PLAIN]
11:10:49 sasl.jaas.config = null
11:10:49 sasl.kerberos.kinit.cmd = /usr/bin/kinit
11:10:49 sasl.kerberos.min.time.before.relogin = 60000
11:10:49 sasl.kerberos.principal.to.local.rules = [DEFAULT]
11:10:49 sasl.kerberos.service.name = null
11:10:49 sasl.kerberos.ticket.renew.jitter = 0.05
11:10:49 sasl.kerberos.ticket.renew.window.factor = 0.8
11:10:49 sasl.login.callback.handler.class = null
11:10:49 sasl.login.class = null
11:10:49 sasl.login.connect.timeout.ms = null
11:10:49 sasl.login.read.timeout.ms = null
11:10:49 sasl.login.refresh.buffer.seconds = 300
11:10:49 sasl.login.refresh.min.period.seconds = 60
11:10:49 sasl.login.refresh.window.factor = 0.8
11:10:49 sasl.login.refresh.window.jitter = 0.05
11:10:49 sasl.login.retry.backoff.max.ms = 10000
11:10:49 sasl.login.retry.backoff.ms = 100
11:10:49 sasl.mechanism.controller.protocol = GSSAPI
11:10:49 sasl.mechanism.inter.broker.protocol = PLAIN
11:10:49 sasl.oauthbearer.clock.skew.seconds = 30
11:10:49 sasl.oauthbearer.expected.audience = null
11:10:49 sasl.oauthbearer.expected.issuer = null
11:10:49 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
11:10:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
11:10:49 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
11:10:49 sasl.oauthbearer.jwks.endpoint.url = null
11:10:49 sasl.oauthbearer.scope.claim.name = scope
11:10:49 sasl.oauthbearer.sub.claim.name = sub
11:10:49 sasl.oauthbearer.token.endpoint.url = null
11:10:49 sasl.server.callback.handler.class = null
11:10:49 sasl.server.max.receive.size = 524288
11:10:49 security.inter.broker.protocol = SASL_PLAINTEXT
11:10:49 security.providers = null
11:10:49 socket.connection.setup.timeout.max.ms = 30000
11:10:49 socket.connection.setup.timeout.ms = 10000
11:10:49 socket.listen.backlog.size = 50
11:10:49 socket.receive.buffer.bytes = 102400
11:10:49 socket.request.max.bytes = 104857600
11:10:49 socket.send.buffer.bytes = 102400
11:10:49 ssl.cipher.suites = []
11:10:49 ssl.client.auth = none
11:10:49 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
11:10:49 ssl.endpoint.identification.algorithm = https
11:10:49 ssl.engine.factory.class = null
11:10:49 ssl.key.password = null
11:10:49 ssl.keymanager.algorithm = SunX509
11:10:49 ssl.keystore.certificate.chain = null
11:10:49 ssl.keystore.key = null
11:10:49 ssl.keystore.location = null
11:10:49 ssl.keystore.password = null
11:10:49 ssl.keystore.type = JKS
11:10:49 ssl.principal.mapping.rules = DEFAULT
11:10:49 ssl.protocol = TLSv1.3
11:10:49 ssl.provider = null
11:10:49 ssl.secure.random.implementation = null
11:10:49 ssl.trustmanager.algorithm = PKIX
11:10:49 ssl.truststore.certificates = null
11:10:49 ssl.truststore.location = null
11:10:49 ssl.truststore.password = null
11:10:49 ssl.truststore.type = JKS
11:10:49 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
11:10:49 transaction.max.timeout.ms = 900000
11:10:49 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
11:10:49 transaction.state.log.load.buffer.size = 5242880
11:10:49 transaction.state.log.min.isr = 1
11:10:49 transaction.state.log.num.partitions = 4
11:10:49 transaction.state.log.replication.factor = 1
11:10:49 transaction.state.log.segment.bytes = 104857600
11:10:49 transactional.id.expiration.ms = 604800000
11:10:49 unclean.leader.election.enable = false
11:10:49 zookeeper.clientCnxnSocket = null
11:10:49 zookeeper.connect = 127.0.0.1:39173
11:10:49 zookeeper.connection.timeout.ms = null
11:10:49 zookeeper.max.in.flight.requests = 10
11:10:49 zookeeper.session.timeout.ms = 30000
11:10:49 zookeeper.set.acl = false
11:10:49 zookeeper.ssl.cipher.suites = null
11:10:49 zookeeper.ssl.client.enable = false
11:10:49 zookeeper.ssl.crl.enable = false
11:10:49 zookeeper.ssl.enabled.protocols = null
11:10:49 zookeeper.ssl.endpoint.identification.algorithm = HTTPS
11:10:49 zookeeper.ssl.keystore.location = null
11:10:49 zookeeper.ssl.keystore.password = null
11:10:49 zookeeper.ssl.keystore.type = null
11:10:49 zookeeper.ssl.ocsp.enable = false
11:10:49 zookeeper.ssl.protocol = TLSv1.2
11:10:49 zookeeper.ssl.truststore.location = null
11:10:49 zookeeper.ssl.truststore.password = null
11:10:49 zookeeper.ssl.truststore.type = null
11:10:49
11:10:49 11:10:49.579 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
11:10:49 11:10:49.638 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 11:10:49 11:10:49.638 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 11:10:49 11:10:49.640 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 11:10:49 11:10:49.644 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 11:10:49 11:10:49.685 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.685 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:49 11:10:49.685 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:49 11:10:49.686 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:49 11:10:49.686 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:49 ] 11:10:49 11:10:49.686 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:49 , 'ip,'127.0.0.1 11:10:49 ] 11:10:49 11:10:49.688 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1768216249046,1768216249046,0,0,0,0,0,0,6} 11:10:49 11:10:49.692 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit8944902187107510952) 11:10:49 11:10:49.695 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit8944902187107510952 since no clean shutdown file was found 11:10:49 11:10:49.699 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 11:10:49 11:10:49.705 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 11:10:49 11:10:49.709 [main] INFO kafka.log.LogManager - Loaded 0 logs in 18ms. 11:10:49 11:10:49.709 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 11:10:49 11:10:49.711 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 11:10:49 11:10:49.712 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 11:10:49 11:10:49.713 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 11:10:49 11:10:49.713 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 11:10:49 11:10:49.714 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 
11:10:49 11:10:49.714 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 11:10:49 11:10:49.730 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 11:10:49 11:10:49.780 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 11:10:49 11:10:49.811 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 11:10:49 11:10:49.816 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:49 11:10:49.817 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:49 11:10:49.820 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 11:10:49 11:10:49.827 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 11:10:49 11:10:49.829 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:49 11:10:49.829 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:49 11:10:49.829 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:49 11:10:49.830 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 11:10:49 11:10:49.832 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 11:10:49 11:10:49.854 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
11:10:49 11:10:49.883 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 11:10:49 11:10:49.884 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:49 11:10:49.885 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:49.997 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:49.997 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.098 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.098 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.198 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.198 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.299 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.299 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.400 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.400 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager 
broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.436 [main] INFO kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 11:10:50 11:10:50.441 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:39115. 11:10:50 11:10:50.476 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 11:10:50 11:10:50.485 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 11:10:50 11:10:50.486 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.487 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.501 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.501 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.519 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 11:10:50 11:10:50.521 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 11:10:50 11:10:50.523 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 11:10:50 11:10:50.525 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 11:10:50 11:10:50.541 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 11:10:50 11:10:50.542 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
11:10:50 11:10:50.547 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 11:10:50 11:10:50.549 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.549 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:10:50 11:10:50.549 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:10:50 11:10:50.549 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.550 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1768216249032,1768216249032,0,0,0,0,0,0,5} 11:10:50 11:10:50.587 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? false) 11:10:50 11:10:50.590 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.590 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.602 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.602 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.603 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.604 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:50 11:10:50 11:10:50.605 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:50 11:10:50.605 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.605 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.606 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33146094928 11:10:50 11:10:50.606 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 33328198249 11:10:50 11:10:50.608 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.608 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 11:10:50 11:10:50.608 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.608 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.609 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 34075227783 11:10:50 11:10:50.610 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 36374043825 11:10:50 11:10:50.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 11:10:50 11:10:50.614 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:50 11:10:50.614 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:50 11:10:50.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 36374043825 11:10:50 11:10:50.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 11:10:50 11:10:50.616 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@bdc6b67f response:: org.apache.zookeeper.MultiResponse@1dbbce85 11:10:50 11:10:50.621 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1768216250603,1768216250603,1,0,0,72057602873098240,209,0,25 11:10:50 11:10:50 11:10:50.622 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:39115, czxid (broker epoch): 25 11:10:50 11:10:50.691 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.692 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.703 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 
11:10:50 11:10:50.703 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.712 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 11:10:50 11:10:50.725 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 11:10:50 11:10:50.730 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.731 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 11:10:50 11:10:50.733 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.734 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 11:10:50 11:10:50.734 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 11:10:50 11:10:50.737 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 11:10:50 11:10:50.738 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 11:10:50 11:10:50.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 11:10:50 11:10:50.739 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F 
response:: 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:50 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 36374043825 11:10:50 11:10:50.742 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 40122501126 11:10:50 11:10:50.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 11:10:50 11:10:50.745 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 11:10:50 11:10:50.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 44042881610 11:10:50 11:10:50.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 11:10:50 11:10:50.746 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 11:10:50 11:10:50.747 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 11:10:50 11:10:50.748 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 11:10:50 11:10:50.753 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.753 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:50 11:10:50 11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:50 11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 44042881610 11:10:50 11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 43827515771 11:10:50 
11:10:50.754 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.755 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 11:10:50 11:10:50.755 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.755 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.755 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44974775237 11:10:50 11:10:50.755 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44155431210 11:10:50 11:10:50.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 11:10:50 11:10:50.756 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 11:10:50 11:10:50.759 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 11:10:50 11:10:50.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 44155431210 11:10:50 11:10:50.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 11:10:50 11:10:50.759 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:10:50 11:10:50.759 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x10000020e9e0000 11:10:50 11:10:50.760 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 38,14 replyHeader:: 38,27,0 request:: org.apache.zookeeper.MultiOperationRecord@fa160ec6 response:: org.apache.zookeeper.MultiResponse@f3584fa6 11:10:50 11:10:50.760 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 11:10:50 11:10:50.762 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 11:10:50 11:10:50.763 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.763 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.764 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 39,4 replyHeader:: 39,27,-101 request:: '/feature,T response:: 11:10:50 11:10:50.768 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 11:10:50 11:10:50.770 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 11:10:50 11:10:50.771 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.771 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:50 11:10:50 11:10:50.771 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:50 11:10:50.771 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.772 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.772 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 44155431210 11:10:50 11:10:50.772 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41342916980 11:10:50 11:10:50.772 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.773 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 11:10:50 11:10:50.773 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 11:10:50 11:10:50.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 45507074165 11:10:50 11:10:50.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 11:10:50 11:10:50.774 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:10:50 11:10:50.774 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x10000020e9e0000 11:10:50 11:10:50.774 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received 
event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 11:10:50 11:10:50.776 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 11:10:50 11:10:50.776 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 11:10:50 11:10:50.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:50 11:10:50.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:50 11:10:50.777 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,1 replyHeader:: 40,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 11:10:50 11:10:50.778 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.778 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 41,4 replyHeader:: 41,28,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:10:50 11:10:50.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.779 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.779 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:10:50 11:10:50.780 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 
11:10:50 11:10:50.780 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.780 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1768216250771,1768216250771,0,0,0,0,38,0,28} 11:10:50 11:10:50.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 11:10:50 11:10:50.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.780 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 11:10:50 11:10:50.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.782 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 43,4 replyHeader:: 43,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1768216250771,1768216250771,0,0,0,0,38,0,28} 11:10:50 11:10:50.793 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.793 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.803 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.804 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.816 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 11:10:50 11:10:50.816 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:10:50 11:10:50.817 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 
11:10:50 11:10:50.819 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.819 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:10:50 11:10:50.819 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 11:10:50 11:10:50.820 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 11:10:50 11:10:50.821 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 11:10:50 11:10:50.821 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 11:10:50 11:10:50.822 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 11:10:50 11:10:50.824 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 11:10:50 11:10:50.824 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 
11:10:50 11:10:50.825 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.825 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:50 11:10:50.825 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:50 11:10:50.826 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response:: 11:10:50 11:10:50.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:10:50 11:10:50.827 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:10:50 11:10:50.828 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response:: 11:10:50 11:10:50.829 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 11:10:50 11:10:50.830 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.830 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 11:10:50 11:10:50.830 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 11:10:50 11:10:50.830 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.830 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.830 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.831 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1768216249098,1768216249098,0,0,0,0,0,0,16} 11:10:50 11:10:50.832 [controller-event-thread] INFO 
kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications 11:10:50 11:10:50.833 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:10:50 11:10:50.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 11:10:50 11:10:50.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.833 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.834 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.834 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1768216249088,1768216249088,0,0,0,0,0,0,14} 11:10:50 11:10:50.835 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 11:10:50 11:10:50.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:10:50 11:10:50.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 11:10:50 11:10:50.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.836 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.837 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1768216249032,1768216249032,0,1,0,0,0,1,25} 11:10:50 11:10:50.838 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:10:50 11:10:50.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/ids/1 11:10:50 11:10:50.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.838 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.839 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3339313135225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373638323136323530353633227d,s{25,25,1768216250603,1768216250603,1,0,0,72057602873098240,209,0,25} 11:10:50 11:10:50.860 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 11:10:50 11:10:50.860 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:50 11:10:50.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:50 11:10:50.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.861 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.861 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1768216249046,1768216249046,0,0,0,0,0,0,6} 11:10:50 11:10:50.865 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 11:10:50 11:10:50.867 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:10:50 11:10:50.867 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 11:10:50 11:10:50.868 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1768216250603,1768216250603,1,0,0,72057602873098240,209,0,25} 11:10:50 11:10:50.870 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 11:10:50 11:10:50.915 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.915 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.915 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:50 11:10:50.916 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:50 11:10:50.922 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 11:10:50 11:10:50.925 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1) 11:10:50 11:10:50.925 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 11:10:50 11:10:50.925 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 11:10:50 11:10:50.926 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress 11:10:50 11:10:50.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:10:50 11:10:50.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 11:10:50 11:10:50.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.929 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x10000020e9e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1768216249078,1768216249078,0,0,0,0,0,0,12} 11:10:50 11:10:50.931 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 11:10:50 11:10:50.931 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 11:10:50 11:10:50.931 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 11:10:50 11:10:50.932 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 11:10:50 11:10:50.933 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 11:10:50 11:10:50.936 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 11:10:50 11:10:50.944 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 11:10:50 11:10:50.944 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 11:10:50 11:10:50.944 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 11:10:50 11:10:50.948 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 11:10:50 11:10:50.949 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 11:10:50 11:10:50.950 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:50 11:10:50.950 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 11:10:50 11:10:50.950 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:10:50 11:10:50.950 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 11:10:50 11:10:50.955 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 11:10:50 11:10:50.955 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 11:10:50 11:10:50.956 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:10:50 
11:10:50.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 11:10:50 11:10:50.957 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 54,3 replyHeader:: 54,28,-101 request:: '/admin/reassign_partitions,T response:: 11:10:50 11:10:50.961 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.961 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:50 11:10:50.962 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:50 11:10:50.962 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 55,4 replyHeader:: 55,28,-101 request:: '/admin/preferred_replica_election,T response:: 11:10:50 11:10:50.963 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:50 11:10:50.963 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 11:10:50 11:10:50.963 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 11:10:50 11:10:50.963 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:50 11:10:50.964 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 11:10:50 11:10:50.964 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 11:10:50 11:10:50.965 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 11:10:50 11:10:50.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.973 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 
to node 1 11:10:50 11:10:50.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 45507074165 11:10:50 11:10:50.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 11:10:50 11:10:50.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 45507074165 11:10:50 11:10:50.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 11:10:50 11:10:50.980 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 11:10:50 11:10:50.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:50 11:10:50.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 11:10:50 11:10:50.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 45507074165 11:10:50 11:10:50.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x38 zxid:0x1d txntype:14 reqpath:n/a 11:10:50 11:10:50.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.982 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:10:50 11:10:50.982 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.982 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 56,14 replyHeader:: 56,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.983 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 57,12 replyHeader:: 57,29,0 request:: '/config/topics,F response:: v{},s{17,17,1768216249102,1768216249102,0,0,0,0,0,0,17} 11:10:50 11:10:50.984 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 58,12 replyHeader:: 58,29,0 request:: '/config/changes,T response:: v{},s{9,9,1768216249062,1768216249062,0,0,0,0,0,0,9} 11:10:50 11:10:50.985 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.986 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:10:50 11:10:50.986 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 11:10:50 11:10:50.986 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.986 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.986 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.986 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 59,12 replyHeader:: 59,29,0 request:: '/config/clients,F response:: v{},s{18,18,1768216249107,1768216249107,0,0,0,0,0,0,18} 11:10:50 11:10:50.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:10:50 11:10:50.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3c 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:10:50 11:10:50.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.988 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 60,12 replyHeader:: 60,29,0 request:: '/config/users,F response:: v{},s{19,19,1768216249114,1768216249114,0,0,0,0,0,0,19} 11:10:50 11:10:50.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:10:50 11:10:50.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 11:10:50 11:10:50.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.990 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 11:10:50 11:10:50.990 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 11:10:50 11:10:50.990 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 61,12 replyHeader:: 61,29,0 request:: '/config/users,F response:: v{},s{19,19,1768216249114,1768216249114,0,0,0,0,0,0,19} 11:10:50 11:10:50.990 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
11:10:50 11:10:50.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 11:10:50 11:10:50.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 11:10:50 11:10:50.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.993 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 62,12 replyHeader:: 62,29,0 request:: '/config/ips,F response:: v{},s{21,21,1768216249122,1768216249122,0,0,0,0,0,0,21} 11:10:50 11:10:50.994 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:10:50 11:10:50.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x3f zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 11:10:50 11:10:50.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:50 11:10:50.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:50 ] 11:10:50 11:10:50.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:50 , 'ip,'127.0.0.1 11:10:50 ] 11:10:50 11:10:50.995 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 63,12 replyHeader:: 63,29,0 request:: '/config/brokers,F response:: v{},s{20,20,1768216249118,1768216249118,0,0,0,0,0,0,20} 11:10:50 11:10:50.996 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
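Just before request processing is enabled, the broker enumerates the dynamic-configuration znodes it read above (/config/topics, /config/changes, /config/clients, /config/users, /config/ips, /config/brokers), all still empty in this run. A minimal sketch of how a test could look at the same broker-level configuration through the Admin API follows; the class and method names are illustrative, broker id "1" matches the log, and the Properties argument is assumed to carry the SASL_PLAINTEXT client settings shown later in the AdminClientConfig dump.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.common.config.ConfigResource;

    public final class BrokerConfigCheck {
        // Reads the configuration of broker id 1; in ZK mode the dynamic part of this
        // is stored under the /config/brokers path the broker just enumerated.
        static Map<ConfigResource, Config> describeBroker(Properties props) throws Exception {
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            try (Admin admin = Admin.create(props)) {
                return admin.describeConfigs(Collections.singleton(broker)).all().get();
            }
        }
    }
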
11:10:50 11:10:50.996 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:50 11:10:50.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:50 11:10:50.996 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 64,3 replyHeader:: 64,29,0 request:: '/controller,T response:: s{27,27,1768216250753,1768216250753,0,0,0,72057602873098240,54,0,27} 11:10:51 11:10:50.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:50.998 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 11:10:51 11:10:50.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:51 11:10:50.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 11:10:51 11:10:50.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:50.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:50.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:50.998 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 65,4 replyHeader:: 65,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373638323136323530373338227d,s{27,27,1768216250753,1768216250753,0,0,0,72057602873098240,54,0,27} 11:10:51 11:10:51.001 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:51 11:10:51.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 11:10:51 11:10:51.001 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 11:10:51 11:10:51.001 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 66,3 
replyHeader:: 66,29,-101 request:: '/admin/preferred_replica_election,T response:: 11:10:51 11:10:51.003 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:10:51 11:10:51.003 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:10:51 11:10:51.003 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216251002 11:10:51 11:10:51.005 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 11:10:51 11:10:51.010 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43322 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.011 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.012 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 11:10:51 11:10:51.013 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:43322 11:10:51 11:10:51.016 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.016 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:51 11:10:51.017 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.017 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:51 11:10:51.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.029 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 11:10:51 bootstrap.servers = [SASL_PLAINTEXT://localhost:39115] 11:10:51 client.dns.lookup = use_all_dns_ips 11:10:51 client.id = test-consumer-id 11:10:51 connections.max.idle.ms = 300000 11:10:51 default.api.timeout.ms = 60000 11:10:51 metadata.max.age.ms = 300000 11:10:51 metric.reporters = [] 11:10:51 
metrics.num.samples = 2
11:10:51 metrics.recording.level = INFO
11:10:51 metrics.sample.window.ms = 30000
11:10:51 receive.buffer.bytes = 65536
11:10:51 reconnect.backoff.max.ms = 1000
11:10:51 reconnect.backoff.ms = 50
11:10:51 request.timeout.ms = 15000
11:10:51 retries = 2147483647
11:10:51 retry.backoff.ms = 100
11:10:51 sasl.client.callback.handler.class = null
11:10:51 sasl.jaas.config = [hidden]
11:10:51 sasl.kerberos.kinit.cmd = /usr/bin/kinit
11:10:51 sasl.kerberos.min.time.before.relogin = 60000
11:10:51 sasl.kerberos.service.name = null
11:10:51 sasl.kerberos.ticket.renew.jitter = 0.05
11:10:51 sasl.kerberos.ticket.renew.window.factor = 0.8
11:10:51 sasl.login.callback.handler.class = null
11:10:51 sasl.login.class = null
11:10:51 sasl.login.connect.timeout.ms = null
11:10:51 sasl.login.read.timeout.ms = null
11:10:51 sasl.login.refresh.buffer.seconds = 300
11:10:51 sasl.login.refresh.min.period.seconds = 60
11:10:51 sasl.login.refresh.window.factor = 0.8
11:10:51 sasl.login.refresh.window.jitter = 0.05
11:10:51 sasl.login.retry.backoff.max.ms = 10000
11:10:51 sasl.login.retry.backoff.ms = 100
11:10:51 sasl.mechanism = PLAIN
11:10:51 sasl.oauthbearer.clock.skew.seconds = 30
11:10:51 sasl.oauthbearer.expected.audience = null
11:10:51 sasl.oauthbearer.expected.issuer = null
11:10:51 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
11:10:51 sasl.oauthbearer.jwks.endpoint.url = null
11:10:51 sasl.oauthbearer.scope.claim.name = scope
11:10:51 sasl.oauthbearer.sub.claim.name = sub
11:10:51 sasl.oauthbearer.token.endpoint.url = null
11:10:51 security.protocol = SASL_PLAINTEXT
11:10:51 security.providers = null
11:10:51 send.buffer.bytes = 131072
11:10:51 socket.connection.setup.timeout.max.ms = 30000
11:10:51 socket.connection.setup.timeout.ms = 10000
11:10:51 ssl.cipher.suites = null
11:10:51 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
11:10:51 ssl.endpoint.identification.algorithm = https
11:10:51 ssl.engine.factory.class = null
11:10:51 ssl.key.password = null
11:10:51 ssl.keymanager.algorithm = SunX509
11:10:51 ssl.keystore.certificate.chain = null
11:10:51 ssl.keystore.key = null
11:10:51 ssl.keystore.location = null
11:10:51 ssl.keystore.password = null
11:10:51 ssl.keystore.type = JKS
11:10:51 ssl.protocol = TLSv1.3
11:10:51 ssl.provider = null
11:10:51 ssl.secure.random.implementation = null
11:10:51 ssl.trustmanager.algorithm = PKIX
11:10:51 ssl.truststore.certificates = null
11:10:51 ssl.truststore.location = null
11:10:51 ssl.truststore.password = null
11:10:51 ssl.truststore.type = JKS
11:10:51
11:10:51 11:10:51.053 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:39115 (id: -1 rack: null)], partitions = [], controller = null).
11:10:51 11:10:51.053 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication
11:10:51 11:10:51.054 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in.
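The AdminClientConfig dump above corresponds to an admin client that could be constructed roughly as sketched below. The property keys and values are copied from the dump; the class and variable names are illustrative, and the JAAS line is a guess because the real sasl.jaas.config is logged as [hidden].

    import java.util.Properties;

    import org.apache.kafka.clients.admin.Admin;

    public final class TestAdminClientFactory {
        // Property values mirror the AdminClientConfig dump above.
        static Admin create() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "SASL_PLAINTEXT://localhost:39115");
            props.put("client.id", "test-consumer-id");
            props.put("request.timeout.ms", "15000");
            props.put("default.api.timeout.ms", "60000");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // sasl.jaas.config is [hidden] in the dump; this placeholder value only
            // illustrates the shape needed for the client to authenticate
            // (see the PLAIN JAAS sketch further below).
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"kafkaclient\" password=\"<placeholder>\";");
            return Admin.create(props);
        }
    }
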
11:10:51 11:10:51.058 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.059 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:10:51 11:10:51.059 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:10:51 11:10:51.059 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216251059 11:10:51 11:10:51.059 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 11:10:51 11:10:51.060 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 11:10:51 11:10:51.060 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.060 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.060 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.061 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 11:10:51 11:10:51.064 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.064 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1768216311061, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
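The queued Call(callName=listNodes, ...) is the internal form a DescribeCluster-style lookup takes inside KafkaAdminClient, and its 15000 ms timeout matches the request.timeout.ms value in the dump. A minimal sketch of issuing that lookup and reading the result is below; the Admin argument is assumed to be a client configured as in the earlier sketch, and the values in the comments are the ones that appear later in this log.

    import java.util.Collection;

    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.DescribeClusterResult;
    import org.apache.kafka.common.Node;

    public final class ClusterProbe {
        // Issues the metadata/listNodes round trip seen in this log and blocks on the futures.
        static void probe(Admin admin) throws Exception {
            DescribeClusterResult cluster = admin.describeCluster();
            Collection<Node> nodes = cluster.nodes().get();   // one node: localhost:39115 (id 1)
            String clusterId = cluster.clusterId().get();     // jx5ycp9PTHOXo1U6H8QTmw in this run
            Node controller = cluster.controller().get();     // broker 1
            System.out.printf("cluster=%s controller=%s nodes=%d%n",
                    clusterId, controller, nodes.size());
        }
    }
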
11:10:51 11:10:51.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.069 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.069 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:39115 (id: -1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.070 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.070 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.070 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:51 11:10:51.071 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.078 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43324 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.075 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:43324 11:10:51 11:10:51.074 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 11:10:51 11:10:51.079 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.080 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.080 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.080 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:10:51 11:10:51.081 [kafka-admin-client-thread | test-consumer-id] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.081 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 11:10:51 11:10:51.081 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.081 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.081 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:39115 (id: 1 rack: null) for sending state change requests 11:10:51 11:10:51.083 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.084 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.085 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.085 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.085 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.085 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 11:10:51 11:10:51.086 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.086 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.086 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:51 11:10:51.086 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header 
RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=39115, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 11:10:51 11:10:51.086 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.086 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.086 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 11:10:51 11:10:51.087 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.087 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.087 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
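The SASL PLAIN handshakes above, together with the request-logger entries further down (principal User:admin on the controller-to-broker connection, User:kafkaclient on the admin client connection), imply JAAS definitions along the lines of the sketch below. Only the user names are taken from the log; the passwords and the listener property name are assumptions, since the test's actual JAAS configuration is never printed.

    // Hypothetical reconstruction of the SASL/PLAIN credentials used by this test broker.
    public final class SaslPlainJaas {
        // Broker-side value, e.g. for listener.name.sasl_plaintext.plain.sasl.jaas.config;
        // the user_<name>="<password>" entries define the credentials the broker accepts.
        static final String BROKER_JAAS =
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" "
                + "user_admin=\"admin-secret\" "
                + "user_kafkaclient=\"client-secret\";";

        // Client-side value for sasl.jaas.config (shown as [hidden] in the dump above).
        static final String CLIENT_JAAS =
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"kafkaclient\" password=\"client-secret\";";
    }
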
11:10:51 11:10:51.087 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.114 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, 
minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.117 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.117 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 11:10:51 11:10:51.118 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.118 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 11:10:51 11:10:51.119 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], 
ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:10:51 11:10:51.120 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:39115 (id: -1 rack: null). correlationId=1, timeoutMs=14941 11:10:51 11:10:51.121 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14941 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:51 11:10:51.127 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 11:10:51 11:10:51.144 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":39115,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43322-0","totalTimeMs":35.538,"requestQueueTimeMs":18.359,"localTimeMs":16.734,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.134,"sendTimeMs":0.31,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.148 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43324-0","totalTimeMs":24.494,"requestQueueTimeMs":15.901,"localTimeMs":6.3,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.14,"sendTimeMs":2.151,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.168 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, 
apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 11:10:51 11:10:51.169 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[]},"connection":"127.0.0.1:39115-127.0.0.1:43324-0","totalTimeMs":18.21,"requestQueueTimeMs":2.597,"localTimeMs":14.812,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.256,"sendTimeMs":0.543,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.171 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = jx5ycp9PTHOXo1U6H8QTmw, nodes = [localhost:39115 (id: 1 rack: null)], partitions = [], controller = localhost:39115 (id: 1 rack: null)) 11:10:51 11:10:51.171 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.171 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.172 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.172 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.172 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43326 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.172 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:43326 11:10:51 11:10:51.173 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:10:51 11:10:51.174 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 
11:10:51.174 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 11:10:51 11:10:51.174 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.174 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.175 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.175 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.176 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 11:10:51 11:10:51.176 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.176 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.176 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:51 11:10:51.176 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.176 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.176 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - 
[AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 11:10:51 11:10:51.176 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.176 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.177 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 11:10:51 11:10:51.177 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.182 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, 
maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.183 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 
[usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:10:51 11:10:51.183 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43326-1","totalTimeMs":4.39,"requestQueueTimeMs":0.506,"localTimeMs":3.468,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.164,"sendTimeMs":0.25,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVe
rsion":"unknown"}} 11:10:51 11:10:51.183 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:39115 (id: 1 rack: null). correlationId=3, timeoutMs=14985 11:10:51 11:10:51.184 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14985 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 11:10:51 11:10:51.195 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=39115, rack=null)], clusterAuthorizedOperations=-2147483648) 11:10:51 11:10:51.196 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 11:10:51 11:10:51.196 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":39115,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:39115-127.0.0.1:43326-1","totalTimeMs":11.028,"requestQueueTimeMs":1.255,"localTimeMs":9.473,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.104,"sendTimeMs":0.195,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.196 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
11:10:51 11:10:51.197 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 11:10:51 11:10:51.199 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:39115-127.0.0.1:43324-0) disconnected 11:10:51 java.io.EOFException: null 11:10:51 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:10:51 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:10:51 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:10:51 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:10:51 at kafka.network.Processor.poll(SocketServer.scala:1055) 11:10:51 at kafka.network.Processor.run(SocketServer.scala:959) 11:10:51 at java.base/java.lang.Thread.run(Thread.java:829) 11:10:51 11:10:51.199 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:39115-127.0.0.1:43326-1) disconnected 11:10:51 java.io.EOFException: null 11:10:51 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:10:51 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:10:51 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:10:51 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:10:51 at kafka.network.Processor.poll(SocketServer.scala:1055) 11:10:51 at kafka.network.Processor.run(SocketServer.scala:959) 11:10:51 at java.base/java.lang.Thread.run(Thread.java:829) 11:10:51 11:10:51.200 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 11:10:51 11:10:51.200 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 11:10:51 11:10:51.200 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 11:10:51 11:10:51.200 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 11:10:51 11:10:51.201 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 11:10:51 11:10:51.201 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 
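[editor's note] The readiness message above ("Found 1 brokers on-line, cluster is ready.") comes from the KafkaTestCluster harness polling the broker over its AdminClient, which is what drives the METADATA/DESCRIBE_CLUSTER exchanges logged earlier. A minimal sketch of such a probe, assuming an already-configured AdminClient; the class and method names are illustrative, not the harness's actual code:

    // Illustrative sketch of a broker-count readiness probe (assumed, not from this build).
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.common.Node;
    import java.util.Collection;

    final class ClusterReadyProbe {
        static boolean isReady(AdminClient admin, int expectedBrokers) throws Exception {
            // describeCluster() triggers the kind of DESCRIBE_CLUSTER round trip seen above.
            Collection<Node> nodes = admin.describeCluster().nodes().get();
            return nodes.size() >= expectedBrokers;
        }
    }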
11:10:51 11:10:51.201 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:39115 11:10:51 11:10:51.202 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 11:10:51 bootstrap.servers = [SASL_PLAINTEXT://localhost:39115] 11:10:51 client.dns.lookup = use_all_dns_ips 11:10:51 client.id = test-consumer-id 11:10:51 connections.max.idle.ms = 300000 11:10:51 default.api.timeout.ms = 60000 11:10:51 metadata.max.age.ms = 300000 11:10:51 metric.reporters = [] 11:10:51 metrics.num.samples = 2 11:10:51 metrics.recording.level = INFO 11:10:51 metrics.sample.window.ms = 30000 11:10:51 receive.buffer.bytes = 65536 11:10:51 reconnect.backoff.max.ms = 1000 11:10:51 reconnect.backoff.ms = 50 11:10:51 request.timeout.ms = 15000 11:10:51 retries = 2147483647 11:10:51 retry.backoff.ms = 100 11:10:51 sasl.client.callback.handler.class = null 11:10:51 sasl.jaas.config = [hidden] 11:10:51 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:10:51 sasl.kerberos.min.time.before.relogin = 60000 11:10:51 sasl.kerberos.service.name = null 11:10:51 sasl.kerberos.ticket.renew.jitter = 0.05 11:10:51 sasl.kerberos.ticket.renew.window.factor = 0.8 11:10:51 sasl.login.callback.handler.class = null 11:10:51 sasl.login.class = null 11:10:51 sasl.login.connect.timeout.ms = null 11:10:51 sasl.login.read.timeout.ms = null 11:10:51 sasl.login.refresh.buffer.seconds = 300 11:10:51 sasl.login.refresh.min.period.seconds = 60 11:10:51 sasl.login.refresh.window.factor = 0.8 11:10:51 sasl.login.refresh.window.jitter = 0.05 11:10:51 sasl.login.retry.backoff.max.ms = 10000 11:10:51 sasl.login.retry.backoff.ms = 100 11:10:51 sasl.mechanism = PLAIN 11:10:51 sasl.oauthbearer.clock.skew.seconds = 30 11:10:51 sasl.oauthbearer.expected.audience = null 11:10:51 sasl.oauthbearer.expected.issuer = null 11:10:51 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:10:51 sasl.oauthbearer.jwks.endpoint.url = null 11:10:51 sasl.oauthbearer.scope.claim.name = scope 11:10:51 sasl.oauthbearer.sub.claim.name = sub 11:10:51 sasl.oauthbearer.token.endpoint.url = null 11:10:51 security.protocol = SASL_PLAINTEXT 11:10:51 security.providers = null 11:10:51 send.buffer.bytes = 131072 11:10:51 socket.connection.setup.timeout.max.ms = 30000 11:10:51 socket.connection.setup.timeout.ms = 10000 11:10:51 ssl.cipher.suites = null 11:10:51 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:10:51 ssl.endpoint.identification.algorithm = https 11:10:51 ssl.engine.factory.class = null 11:10:51 ssl.key.password = null 11:10:51 ssl.keymanager.algorithm = SunX509 11:10:51 ssl.keystore.certificate.chain = null 11:10:51 ssl.keystore.key = null 11:10:51 ssl.keystore.location = null 11:10:51 ssl.keystore.password = null 11:10:51 ssl.keystore.type = JKS 11:10:51 ssl.protocol = TLSv1.3 11:10:51 ssl.provider = null 11:10:51 ssl.secure.random.implementation = null 11:10:51 ssl.trustmanager.algorithm = PKIX 11:10:51 ssl.truststore.certificates = null 11:10:51 ssl.truststore.location = null 11:10:51 ssl.truststore.password = null 11:10:51 ssl.truststore.type = JKS 11:10:51 11:10:51 11:10:51.202 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:39115 (id: -1 rack: null)], partitions = [], controller = null). 
11:10:51 11:10:51.203 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 11:10:51 11:10:51.208 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:10:51 11:10:51.208 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:10:51 11:10:51.208 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216251208 11:10:51 11:10:51.208 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 11:10:51 11:10:51.213 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 11:10:51 11:10:51.213 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.213 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:39115 (id: -1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.214 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.214 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.214 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43328 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.214 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:43328 11:10:51 11:10:51.217 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.218 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:10:51 11:10:51.218 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.219 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.219 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker 
localhost:39115 (id: 1 rack: null) 11:10:51 11:10:51.219 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1768216311217, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 11:10:51 11:10:51.219 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.219 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 11:10:51 11:10:51.219 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 11:10:51 11:10:51.219 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:39115 (id: 1 rack: null) 11:10:51 11:10:51.220 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.221 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.221 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.221 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.222 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.222 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.222 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 11:10:51 11:10:51.222 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.223 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to 
client 11:10:51 11:10:51.223 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.223 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.223 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 11:10:51 11:10:51.223 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.223 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.223 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 11:10:51 11:10:51.223 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.227 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43328-1","totalTimeMs":1.639,"requestQueueTimeMs":0.249,"localTimeMs":0.925,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.082,"sendTimeMs":0.381,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.227 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header 
RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.228 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], 
ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:10:51 11:10:51.228 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:39115 (id: -1 rack: null). 
correlationId=1, timeoutMs=14985 11:10:51 11:10:51.228 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14985 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:51 11:10:51.232 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[]},"connection":"127.0.0.1:39115-127.0.0.1:43328-1","totalTimeMs":2.222,"requestQueueTimeMs":0.199,"localTimeMs":1.764,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.107,"sendTimeMs":0.15,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.233 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 11:10:51 11:10:51.234 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = jx5ycp9PTHOXo1U6H8QTmw, nodes = [localhost:39115 (id: 1 rack: null)], partitions = [], controller = localhost:39115 (id: 1 rack: null)) 11:10:51 11:10:51.234 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.234 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.234 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.234 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.234 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43330 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] 
recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.235 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:43330 11:10:51 11:10:51.235 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:10:51 11:10:51.235 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.235 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 11:10:51 11:10:51.236 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.236 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.237 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.237 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.237 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.237 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.237 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.238 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 11:10:51 11:10:51.238 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.238 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.238 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max 
lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:51 11:10:51.239 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 11:10:51 11:10:51.239 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.239 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.239 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 11:10:51 11:10:51.239 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.245 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.245 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.250 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.250 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], 
CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:10:51 11:10:51.251 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,
"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43330-2","totalTimeMs":3.661,"requestQueueTimeMs":0.338,"localTimeMs":1.114,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":1.794,"sendTimeMs":0.413,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.251 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14982, validateOnly=false) to localhost:39115 (id: 1 rack: null). correlationId=3, timeoutMs=14982 11:10:51 11:10:51.253 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14982 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14982, validateOnly=false) 11:10:51 11:10:51.276 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.276 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 11:10:51 11:10:51.276 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 11:10:51 11:10:51.277 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 11:10:51 11:10:51.278 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.278 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.278 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.279 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 
68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 11:10:51 11:10:51.305 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 11:10:51 11:10:51.308 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.337 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 11:10:51 11:10:51.337 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:51 11:10:51.338 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 11:10:51 11:10:51.341 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.341 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.342 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.342 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.342 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.342 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 45507074165 11:10:51 11:10:51.342 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 43130084922 11:10:51 11:10:51.357 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 11:10:51 11:10:51.358 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:51 11:10:51.358 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 44238754747 11:10:51 11:10:51.358 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 11:10:51 11:10:51.359 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 11:10:51 11:10:51.371 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.371 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.372 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.372 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.372 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.372 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 44238754747 11:10:51 11:10:51.372 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46066155936 11:10:51 11:10:51.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 11:10:51 11:10:51.381 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.381 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 48600353754 11:10:51 11:10:51.381 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 11:10:51 11:10:51.381 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:10:51 11:10:51.381 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x10000020e9e0000 11:10:51 11:10:51.381 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 11:10:51 11:10:51.382 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a225162435349486568547a366f797667726b59734a7477222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 11:10:51 11:10:51.383 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 
11:10:51 ] 11:10:51 11:10:51.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.383 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 11:10:51 11:10:51.385 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1768216249046,1768216249046,0,1,0,0,0,1,32} 11:10:51 11:10:51.385 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.385 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,F response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a225162435349486568547a366f797667726b59734a7477222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1768216251371,1768216251371,0,0,0,0,116,0,32} 11:10:51 11:10:51.388 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 11:10:51 11:10:51.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.388 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.389 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a225162435349486568547a366f797667726b59734a7477222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1768216251371,1768216251371,0,0,0,0,116,0,32} 11:10:51 11:10:51.396 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(QbCSIHehTz6oyvgrkYsJtw),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 11:10:51 11:10:51.397 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 11:10:51 11:10:51.399 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.400 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:51 11:10:51.404 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48600353754 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.415 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.416 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.416 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 48600353754 11:10:51 11:10:51.416 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 49462101568 11:10:51 11:10:51.416 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 51603575070 11:10:51 11:10:51.440 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 11:10:51 11:10:51.441 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.441 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 51603575070 11:10:51 11:10:51.441 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 11:10:51 11:10:51.442 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 11:10:51 11:10:51.447 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51603575070 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.448 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.449 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.449 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.449 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51603575070 11:10:51 11:10:51.449 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53628334715 11:10:51 11:10:51.449 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54882821870 11:10:51 11:10:51.466 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 11:10:51 11:10:51.466 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.466 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests 
are matching for Zxid: 22, Digest in log and actual tree: 54882821870 11:10:51 11:10:51.466 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 11:10:51 11:10:51.467 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 11:10:51 11:10:51.473 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.473 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.473 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.473 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54882821870 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54882821870 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54024908132 11:10:51 11:10:51.474 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54550712703 11:10:51 11:10:51.475 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 11:10:51 11:10:51.476 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.476 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 54550712703 11:10:51 11:10:51.476 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 11:10:51 11:10:51.476 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: 
org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 11:10:51 11:10:51.483 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:51 11:10:51.486 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 11:10:51 11:10:51.488 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 11:10:51 11:10:51.489 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:51 11:10:51.491 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=39115)]) 11:10:51 11:10:51.504 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 11:10:51 11:10:51.540 [data-plane-kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 11:10:51 11:10:51.540 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 11:10:51 11:10:51.555 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 11:10:51 11:10:51.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 11:10:51 11:10:51.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.557 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 
0x10000020e9e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1768216251341,1768216251341,0,0,0,0,25,0,31} 11:10:51 11:10:51.610 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:51 11:10:51.613 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 11:10:51 11:10:51.614 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:51 11:10:51.614 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:51 11:10:51.619 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:51 11:10:51.634 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:51 11:10:51.637 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:51 11:10:51.639 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit8944902187107510952/my-test-topic-0 with properties {} 11:10:51 11:10:51.641 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 11:10:51 11:10:51.642 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 11:10:51 11:10:51.644 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(QbCSIHehTz6oyvgrkYsJtw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:51 11:10:51.646 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:51 11:10:51.656 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 
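Note on the ZooKeeper packet traces above: the ClientCnxn records print znode payloads as '#'-prefixed hex strings (for example the data written to /config/topics/my-test-topic and /brokers/topics/my-test-topic). When reading these traces it can help to decode the hex back into the JSON that Kafka stores in ZooKeeper. The following is a minimal, self-contained Java sketch for doing that; the class and method names are illustrative and are not part of this build.

    // HexZnodeDecoder.java - illustrative helper, not part of this build.
    // Decodes the '#'-prefixed hex payloads shown in the ZooKeeper packet
    // traces above back into the JSON strings Kafka stores in ZooKeeper.
    public final class HexZnodeDecoder {

        public static String decode(String hexPayload) {
            // Strip the leading '#' the trace uses to mark raw znode data.
            String hex = hexPayload.startsWith("#") ? hexPayload.substring(1) : hexPayload;
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            // Payload of /config/topics/my-test-topic from the trace above.
            System.out.println(decode("#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d"));
            // Prints: {"version":1,"config":{}}
        }
    }

Applied to the /brokers/topics/my-test-topic payload above, the same decoding yields the partition assignment JSON {"partitions":{"0":[1]},"topic_id":"QbCSIHehTz6oyvgrkYsJtw","adding_replicas":{},"removing_replicas":{},"version":3}, which matches the topic id and single-replica assignment reported elsewhere in the trace.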
11:10:51 11:10:51.661 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 159ms correlationId 1 from controller 1 for 1 partitions 11:10:51 11:10:51.667 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=QbCSIHehTz6oyvgrkYsJtw, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 11:10:51 11:10:51.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":39115}]},"response":{"errorCode":0,"topics":[{"topicId":"QbCSIHehTz6oyvgrkYsJtw","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43322-0","totalTimeMs":174.293,"requestQueueTimeMs":7.948,"localTimeMs":165.719,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.226,"sendTimeMs":0.399,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.668 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=39115, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 11:10:51 11:10:51.675 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 11:10:51 11:10:51.683 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 11:10:51 11:10:51.684 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 
11:10:51 11:10:51.684 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14982,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43330-2","totalTimeMs":428.981,"requestQueueTimeMs":3.71,"localTimeMs":148.252,"remoteTimeMs":276.588,"throttleTimeMs":0,"responseQueueTimeMs":0.125,"sendTi
meMs":0.304,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.685 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, 
configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 11:10:51 11:10:51.685 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 11:10:51 11:10:51.686 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":39115,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43322-0","totalTimeMs":16.702,"requestQueueTimeMs":2.707,"localTimeMs":12.508,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.509,"sendTimeMs":0.977,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.689 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 11:10:51 11:10:51.689 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
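The CREATE_TOPICS round trip above (request sent with correlationId=3, response received with errorCode=0 and topicId QbCSIHehTz6oyvgrkYsJtw) corresponds to a single Kafka Admin API call. The test code itself is not part of this log, so the block below is only a sketch of the kind of call that would produce this trace, assuming the same PLAIN mechanism the consumer configuration later in the log uses; the bootstrap address, client id, topic name, partition count and replication factor are taken from the trace, while the credentials are placeholders.

    // Illustrative sketch only: reconstructs the kind of Admin API call that
    // would produce the CREATE_TOPICS request/response traced above. The
    // password is a placeholder; the real JAAS settings are not shown in this log.
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.config.SaslConfigs;

    public final class CreateTestTopic {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
            props.put(CommonClientConfigs.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder credentials; the principal in the trace is User:kafkaclient.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"kafkaclient\" password=\"changeme\";");

            try (Admin admin = Admin.create(props)) {
                // One partition, replication factor 1 - matching the CreatableTopic in the trace.
                admin.createTopics(List.of(new NewTopic("my-test-topic", 1, (short) 1)))
                     .all()
                     .get();
            }
        }
    }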
11:10:51 11:10:51.691 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 11:10:51 11:10:51.691 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:39115-127.0.0.1:43328-1) disconnected 11:10:51 java.io.EOFException: null 11:10:51 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:10:51 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:10:51 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:10:51 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:10:51 at kafka.network.Processor.poll(SocketServer.scala:1055) 11:10:51 at kafka.network.Processor.run(SocketServer.scala:959) 11:10:51 at java.base/java.lang.Thread.run(Thread.java:829) 11:10:51 11:10:51.691 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:39115-127.0.0.1:43330-2) disconnected 11:10:51 java.io.EOFException: null 11:10:51 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:10:51 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:10:51 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:10:51 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:10:51 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:10:51 at kafka.network.Processor.poll(SocketServer.scala:1055) 11:10:51 at kafka.network.Processor.run(SocketServer.scala:959) 11:10:51 at java.base/java.lang.Thread.run(Thread.java:829) 11:10:51 11:10:51.692 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 11:10:51 11:10:51.692 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 11:10:51 11:10:51.692 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 11:10:51 11:10:51.692 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 11:10:51 11:10:51.692 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 
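After the admin client shuts down, the ConsumerConfig dump that follows shows the consumer the test then builds: bootstrap SASL_PLAINTEXT://localhost:39115, group.id mso-group, String key/value deserializers, auto.offset.reset latest, and SASL_PLAINTEXT with the PLAIN mechanism (the sasl.jaas.config itself is shown as [hidden]). A roughly equivalent consumer setup might look like the sketch below; it is not taken from the test source, and the credentials are placeholders. The FindCoordinator and SASL handshake records that follow are this consumer connecting to the broker.

    // Illustrative sketch only: a consumer configured roughly like the
    // ConsumerConfig dump that follows (group mso-group, SASL_PLAINTEXT/PLAIN,
    // String deserializers, subscribed to my-test-topic).
    // Credentials are placeholders; the real sasl.jaas.config is [hidden] in the log.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public final class TestTopicConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:39115");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"kafkaclient\" password=\"changeme\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // First poll triggers coordinator lookup, group join and fetch.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }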
11:10:51 11:10:51.715 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 11:10:51 allow.auto.create.topics = false 11:10:51 auto.commit.interval.ms = 5000 11:10:51 auto.offset.reset = latest 11:10:51 bootstrap.servers = [SASL_PLAINTEXT://localhost:39115] 11:10:51 check.crcs = true 11:10:51 client.dns.lookup = use_all_dns_ips 11:10:51 client.id = mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549 11:10:51 client.rack = 11:10:51 connections.max.idle.ms = 540000 11:10:51 default.api.timeout.ms = 60000 11:10:51 enable.auto.commit = true 11:10:51 exclude.internal.topics = true 11:10:51 fetch.max.bytes = 52428800 11:10:51 fetch.max.wait.ms = 500 11:10:51 fetch.min.bytes = 1 11:10:51 group.id = mso-group 11:10:51 group.instance.id = null 11:10:51 heartbeat.interval.ms = 3000 11:10:51 interceptor.classes = [] 11:10:51 internal.leave.group.on.close = true 11:10:51 internal.throw.on.fetch.stable.offset.unsupported = false 11:10:51 isolation.level = read_uncommitted 11:10:51 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:10:51 max.partition.fetch.bytes = 1048576 11:10:51 max.poll.interval.ms = 600000 11:10:51 max.poll.records = 500 11:10:51 metadata.max.age.ms = 300000 11:10:51 metric.reporters = [] 11:10:51 metrics.num.samples = 2 11:10:51 metrics.recording.level = INFO 11:10:51 metrics.sample.window.ms = 30000 11:10:51 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:10:51 receive.buffer.bytes = 65536 11:10:51 reconnect.backoff.max.ms = 1000 11:10:51 reconnect.backoff.ms = 50 11:10:51 request.timeout.ms = 30000 11:10:51 retry.backoff.ms = 100 11:10:51 sasl.client.callback.handler.class = null 11:10:51 sasl.jaas.config = [hidden] 11:10:51 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:10:51 sasl.kerberos.min.time.before.relogin = 60000 11:10:51 sasl.kerberos.service.name = null 11:10:51 sasl.kerberos.ticket.renew.jitter = 0.05 11:10:51 sasl.kerberos.ticket.renew.window.factor = 0.8 11:10:51 sasl.login.callback.handler.class = null 11:10:51 sasl.login.class = null 11:10:51 sasl.login.connect.timeout.ms = null 11:10:51 sasl.login.read.timeout.ms = null 11:10:51 sasl.login.refresh.buffer.seconds = 300 11:10:51 sasl.login.refresh.min.period.seconds = 60 11:10:51 sasl.login.refresh.window.factor = 0.8 11:10:51 sasl.login.refresh.window.jitter = 0.05 11:10:51 sasl.login.retry.backoff.max.ms = 10000 11:10:51 sasl.login.retry.backoff.ms = 100 11:10:51 sasl.mechanism = PLAIN 11:10:51 sasl.oauthbearer.clock.skew.seconds = 30 11:10:51 sasl.oauthbearer.expected.audience = null 11:10:51 sasl.oauthbearer.expected.issuer = null 11:10:51 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:10:51 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:10:51 sasl.oauthbearer.jwks.endpoint.url = null 11:10:51 sasl.oauthbearer.scope.claim.name = scope 11:10:51 sasl.oauthbearer.sub.claim.name = sub 11:10:51 sasl.oauthbearer.token.endpoint.url = null 11:10:51 security.protocol = SASL_PLAINTEXT 11:10:51 security.providers = null 11:10:51 send.buffer.bytes = 131072 11:10:51 session.timeout.ms = 50000 11:10:51 socket.connection.setup.timeout.max.ms = 30000 11:10:51 socket.connection.setup.timeout.ms = 10000 11:10:51 ssl.cipher.suites = null 11:10:51 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:10:51 ssl.endpoint.identification.algorithm = https 11:10:51 
ssl.engine.factory.class = null 11:10:51 ssl.key.password = null 11:10:51 ssl.keymanager.algorithm = SunX509 11:10:51 ssl.keystore.certificate.chain = null 11:10:51 ssl.keystore.key = null 11:10:51 ssl.keystore.location = null 11:10:51 ssl.keystore.password = null 11:10:51 ssl.keystore.type = JKS 11:10:51 ssl.protocol = TLSv1.3 11:10:51 ssl.provider = null 11:10:51 ssl.secure.random.implementation = null 11:10:51 ssl.trustmanager.algorithm = PKIX 11:10:51 ssl.truststore.certificates = null 11:10:51 ssl.truststore.location = null 11:10:51 ssl.truststore.password = null 11:10:51 ssl.truststore.type = JKS 11:10:51 value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:10:51 11:10:51 11:10:51.716 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initializing the Kafka consumer 11:10:51 11:10:51.727 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 11:10:51 11:10:51.771 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:10:51 11:10:51.771 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:10:51 11:10:51.771 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216251771 11:10:51 11:10:51.771 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Kafka consumer initialized 11:10:51 11:10:51.772 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Subscribed to topic(s): my-test-topic 11:10:51 11:10:51.772 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: -1 rack: null) 11:10:51 11:10:51.775 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.775 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: -1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.775 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.776 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.776 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43332 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.776 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:43332 11:10:51 
11:10:51.777 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:10:51 11:10:51.777 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.777 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed connection to node -1. Fetching API versions. 11:10:51 11:10:51.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.779 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.779 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.779 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.779 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.780 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INITIAL 11:10:51 11:10:51.780 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.781 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.782 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:51 11:10:51.782 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.782 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.782 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to COMPLETE 11:10:51 11:10:51.782 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.782 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.782 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating API versions fetch from node -1. 11:10:51 11:10:51.782 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.785 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.786 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 
[usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:10:51 11:10:51.787 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVers
ion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43332-2","totalTimeMs":2.393,"requestQueueTimeMs":0.475,"localTimeMs":1.386,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.124,"sendTimeMs":0.406,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.790 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: -1 rack: null) 11:10:51 11:10:51.790 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:51 11:10:51.792 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:51 11:10:51.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43332-2","totalTimeMs":9.3,"requestQueueTimeMs":1.288,"localTimeMs":7.65,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.143,"sendTimeMs":0.218,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.802 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:51 11:10:51.805 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to QbCSIHehTz6oyvgrkYsJtw 11:10:51 11:10:51.808 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Cluster ID: jx5ycp9PTHOXo1U6H8QTmw 11:10:51 11:10:51.808 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:51 11:10:51.810 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.810 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:51 11:10:51.810 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:51 11:10:51.811 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:51 11:10:51.812 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:51 11:10:51.813 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:51 11:10:51.813 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 11:10:51 11:10:51.814 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.814 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.815 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1768216249046,1768216249046,0,1,0,0,0,1,32} 11:10:51 11:10:51.821 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> 
ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 11:10:51 11:10:51.822 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.825 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 11:10:51 11:10:51.825 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 11:10:51 11:10:51.825 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54550712703 11:10:51 11:10:51.827 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54372910471 11:10:51 11:10:51.828 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 11:10:51 11:10:51.828 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 11:10:51 11:10:51.828 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 56805664930 11:10:51 11:10:51.828 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 11:10:51 11:10:51.829 
[main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 11:10:51 11:10:51.835 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56805664930 11:10:51 11:10:51.836 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56654149292 11:10:51 11:10:51.837 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 11:10:51 11:10:51.837 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.837 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 57743292423 11:10:51 11:10:51.837 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 11:10:51 11:10:51.837 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:10:51 11:10:51.837 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x10000020e9e0000 11:10:51 11:10:51.838 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 11:10:51 11:10:51.838 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: 
'/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a224e364a64474b536e533575503551735163616e733377222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 11:10:51 11:10:51.839 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 11:10:51 11:10:51.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.839 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.839 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 11:10:51 11:10:51.839 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply 
session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1768216249046,1768216249046,0,2,0,0,0,2,38} 11:10:51 11:10:51.841 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:51 11:10:51.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:51 11:10:51.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.841 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: #7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a224e364a64474b536e533575503551735163616e733377222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1768216251835,1768216251835,0,0,0,0,548,0,38} 11:10:51 11:10:51.842 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:51 11:10:51.847 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment 
[Set(TopicIdReplicaAssignment(__consumer_offsets,Some(N6JdGKSnS5uP5QsQcans3w),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 11:10:51 11:10:51.847 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.848 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216251848, latencyMs=75, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=0), 
responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:51 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
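The FIND_COORDINATOR exchange above comes back with errorCode 15 (COORDINATOR_NOT_AVAILABLE) because the broker is still creating the 50 __consumer_offsets partitions at that moment; the consumer logs the CoordinatorNotAvailableException, refreshes its metadata, and simply retries coordinator discovery on the next poll. For orientation, a minimal, hypothetical Java sketch of a consumer configured the way this handshake suggests (SASL_PLAINTEXT with the PLAIN mechanism against localhost:39115, group mso-group, topic my-test-topic, principal admin) follows; the class name and the password are illustrative placeholders only and are not taken from the job or the sdc-distribution-client sources.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PairwiseConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values visible in the log above: broker address, group id, client-id prefix.
        props.put("bootstrap.servers", "localhost:39115");
        props.put("group.id", "mso-group");
        props.put("client.id", "mso-123456-consumer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // The handshake in the log uses SASL_PLAINTEXT with the PLAIN mechanism and principal User:admin.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // The password is NOT shown anywhere in the log; this value is a placeholder assumption.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"<not-in-log>\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // The first poll() drives the same request sequence recorded above:
            // API_VERSIONS, METADATA, then FIND_COORDINATOR. A COORDINATOR_NOT_AVAILABLE
            // (errorCode 15) response is retriable, so the client refreshes metadata and
            // asks again once the broker has finished creating the __consumer_offsets partitions.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("records received: " + records.count());
        }
    }
}

In other words, the repeated controller-event-thread and FIND_COORDINATOR lines that follow are expected start-up noise for a single-broker test cluster, not a failure of the consumer itself.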
11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43332-2","totalTimeMs":45.932,"requestQueueTimeMs":1.191,"localTimeMs":44.24,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.181,"sendTimeMs":0.318,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO 
state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.849 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 11:10:51 11:10:51.850 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:51 11:10:51.853 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:51 11:10:51.858 [ProcessThread(sid:0 cport:39173):] 
DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.858 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.858 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 57743292423 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 57743292423 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 57149982317 11:10:51 11:10:51.859 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60161348112 11:10:51 11:10:51.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 11:10:51 11:10:51.860 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 60161348112 11:10:51 11:10:51.860 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 11:10:51 11:10:51.869 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60161348112 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.871 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60161348112 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59406507252 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61575448265 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61575448265 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61575448265 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62460829080 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
62692792090 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62692792090 11:10:51 11:10:51.872 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62692792090 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60569204087 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60779080893 11:10:51 11:10:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 61575448265 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60779080893 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.873 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 11:10:51 11:10:51.873 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60779080893 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63002000382 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63471386268 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63471386268 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63471386268 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60911062614 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62878099622 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.874 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62878099622 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62878099622 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62561013623 11:10:51 11:10:51.874 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66220254523 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 62692792090 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66220254523 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x10000020e9e0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 60779080893 11:10:51 11:10:51.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 11:10:51 11:10:51.875 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66220254523 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68311698262 11:10:51 11:10:51.875 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 11:10:51 11:10:51.876 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 69742050238 11:10:51 11:10:51.876 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 11:10:51 11:10:51.876 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.876 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 63471386268 11:10:51 11:10:51.876 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 69742050238 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 69742050238 11:10:51 11:10:51.876 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68190173652 11:10:51 11:10:51.877 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 71851139173 11:10:51 11:10:51.876 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 62878099622 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 11:10:51 11:10:51.877 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 66220254523 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 11:10:51 11:10:51.878 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 69742050238 11:10:51 11:10:51.878 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 11:10:51 11:10:51.878 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 11:10:51 11:10:51.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 11:10:51 11:10:51.879 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 71851139173 11:10:51 11:10:51.879 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 11:10:51 11:10:51.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 11:10:51 11:10:51.880 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 71851139173 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.880 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.880 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 71851139173 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72607047390 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73870386845 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73870386845 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73870386845 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 73819374188 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77337606199 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 11:10:51 11:10:51.881 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.881 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.882 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 73870386845 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77337606199 11:10:51 11:10:51.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77337606199 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79356039516 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83033652542 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83033652542 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 4 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.882 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83033652542 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80038726779 11:10:51 11:10:51.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80887013788 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80887013788 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 11:10:51 11:10:51.883 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.883 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 77337606199 11:10:51 11:10:51.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 80887013788 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83678605634 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83851418207 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83851418207 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.884 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83851418207 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84474922544 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85922382528 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 85922382528 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.884 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85922382528 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83867870243 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84313910397 11:10:51 11:10:51.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 11:10:51 11:10:51.885 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 83033652542 11:10:51 11:10:51.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84313910397 11:10:51 11:10:51.885 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.886 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: 
org.apache.zookeeper.MultiResponse@2c19b7b6 11:10:51 11:10:51.886 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 80887013788 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84313910397 11:10:51 11:10:51.886 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84020627725 11:10:51 11:10:51.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 11:10:51 11:10:51.887 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.886 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 11:10:51 11:10:51.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 83851418207 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88127828843 11:10:51 11:10:51.887 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88127828843 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.887 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.887 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 100,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.887 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88127828843 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87775374983 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90647514878 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90647514878 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 90647514878 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91137866063 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 93463902826 11:10:51 11:10:51.888 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.888 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 93463902826 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.889 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.889 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.889 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 85922382528 11:10:51 11:10:51.889 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 93463902826 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95561798925 11:10:51 11:10:51.889 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99373494945 11:10:51 11:10:51.889 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 11:10:51 11:10:51.890 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.890 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 11:10:51 11:10:51.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 84313910397 11:10:51 11:10:51.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 11:10:51 11:10:51.890 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.890 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 11:10:51 11:10:51.890 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.890 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.891 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 11:10:51 11:10:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 88127828843 11:10:51 11:10:51.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99373494945 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99373494945 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95825174880 11:10:51 11:10:51.894 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99624085855 11:10:51 11:10:51.894 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.895 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99624085855 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99624085855 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103510333845 11:10:51 11:10:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105166748503 11:10:51 11:10:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 90647514878 11:10:51 11:10:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 11:10:51 11:10:51.895 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 93463902826 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 
11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105166748503 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.896 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 99373494945 11:10:51 11:10:51.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 11:10:51 11:10:51.896 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105166748503 11:10:51 11:10:51.896 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104175211942 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105526301189 11:10:51 11:10:51.896 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.897 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105526301189 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 99624085855 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.897 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.897 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.898 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 105166748503 11:10:51 11:10:51.898 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 11:10:51 11:10:51.898 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 105526301189 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103392392426 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107421386248 11:10:51 11:10:51.898 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.898 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.898 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.899 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 105526301189 11:10:51 11:10:51.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107421386248 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107421386248 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110097336290 11:10:51 11:10:51.899 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114046824654 11:10:51 11:10:51.899 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.900 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.900 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 107421386248 11:10:51 11:10:51.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114046824654 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.900 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.900 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.901 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 11:10:51 11:10:51.901 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.901 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 114046824654 11:10:51 11:10:51.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 11:10:51 11:10:51.901 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 114046824654 11:10:51 11:10:51.901 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110813176952 11:10:51 11:10:51.901 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111850520290 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.901 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111850520290 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.902 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.902 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 11:10:51 11:10:51.902 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.903 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 111850520290 11:10:51 11:10:51.903 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:10:51 11:10:51.903 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 11:10:51 11:10:51.903 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:51 11:10:51.903 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:10:51 11:10:51.903 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 11:10:51 11:10:51.903 [main] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:51 11:10:51.903 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:51 11:10:51.905 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:10:51 11:10:51.905 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:51 11:10:51.905 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed connection to node 1. Fetching API versions. 11:10:51 11:10:51.906 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43334 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:51 11:10:51.906 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:43334 11:10:51 11:10:51.906 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 111850520290 11:10:51 11:10:51.906 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111766238129 11:10:51 11:10:51.906 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112456300327 11:10:51 11:10:51.907 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:51 11:10:51.907 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 112456300327 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.907 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:51 11:10:51.907 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:51 11:10:51.908 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:51 11:10:51.908 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:51 11:10:51.908 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:51 11:10:51.907 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.908 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INITIAL 11:10:51 11:10:51.908 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INTERMEDIATE 11:10:51 11:10:51.908 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.908 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:51 11:10:51.908 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112456300327 11:10:51 11:10:51.908 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114533047692 11:10:51 11:10:51.909 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116794673606 11:10:51 11:10:51.909 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; 
no session expiration, sending 0 ms to client 11:10:51 11:10:51.909 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.909 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:51 11:10:51.909 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:51 11:10:51.909 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to COMPLETE 11:10:51 11:10:51.909 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 11:10:51 11:10:51.909 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 11:10:51 11:10:51.909 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating API versions fetch from node 1. 11:10:51 11:10:51.909 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:51 11:10:51.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 11:10:51 11:10:51.909 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.910 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.910 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.910 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 112456300327 11:10:51 11:10:51.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 11:10:51 11:10:51.910 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116794673606 11:10:51 11:10:51.910 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.911 
[main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 11:10:51 11:10:51.912 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), 
ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:51 11:10:51.911 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116794673606 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115897285379 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119131068572 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.912 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":1.569,"requestQueueTimeMs":0.266,"localTimeMs":0.939,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.264,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:51 11:10:51.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 11:10:51 11:10:51.913 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 116794673606 11:10:51 11:10:51.913 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 11:10:51 11:10:51.913 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 11:10:51 11:10:51.912 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.912 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
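For reference, the exchange above shows a Kafka consumer (clientId mso-123456-consumer-..., groupId mso-group) authenticating to the embedded test broker on localhost:39115 over SASL_PLAINTEXT with the PLAIN mechanism, negotiating API versions, and then (below) requesting metadata for my-test-topic and locating its group coordinator. A consumer producing this sequence might be configured roughly as in the sketch that follows; this is an illustrative example only, not the test code that generated this log, and the class name, JAAS password, and poll loop are assumptions (the broker port is assigned dynamically on each run).

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SaslPlainConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Endpoint and identifiers as seen in the log above; the port changes per test run.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL_PLAINTEXT + PLAIN, matching the SaslClientAuthenticator state transitions
        // logged above (SEND_APIVERSIONS_REQUEST -> ... -> COMPLETE). The principal in the
        // log is User:admin; the password here is a placeholder, not taken from the build.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing and polling is what triggers the METADATA and FIND_COORDINATOR
            // requests that appear next in the log.
            consumer.subscribe(List.of("my-test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}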
11:10:51 11:10:51.913 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.914 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119131068572 11:10:51 11:10:51.914 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:51 11:10:51.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 11:10:51 11:10:51.914 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:51 11:10:51.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 119131068572 11:10:51 11:10:51.914 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:51 11:10:51.917 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:51 11:10:51.918 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.476,"requestQueueTimeMs":0.163,"localTimeMs":1.998,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.1,"sendTimeMs":0.214,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:51 11:10:51.918 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:51 11:10:51.918 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:51 11:10:51.918 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:51 11:10:51.919 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:51 11:10:51.914 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.919 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.919 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.919 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.919 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 119131068572 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119825533010 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120875761540 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120875761540 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120875761540 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121465422933 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125028025661 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125028025661 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.920 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125028025661 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122982048608 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124268698354 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124268698354 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.914 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124268698354 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125895624175 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128938265903 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128938265903 11:10:51 11:10:51.921 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.921 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128938265903 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129122685011 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131595913269 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131595913269 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:51 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:51 ] 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:51 , 'ip,'127.0.0.1 11:10:51 ] 11:10:51 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131595913269 11:10:52 11:10:51.922 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131542033124 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131636489376 11:10:52 11:10:51.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.923 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 120875761540 11:10:52 11:10:51.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131636489376 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.923 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 11:10:52 11:10:51.923 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131636489376 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129362235069 11:10:52 11:10:51.924 
[main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 11:10:52 11:10:51.924 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 125028025661 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130009790219 11:10:52 11:10:51.924 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130009790219 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 130009790219 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133457708624 11:10:52 11:10:51.924 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135344362136 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.925 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.925 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 11:10:52 11:10:51.925 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.925 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 124268698354 11:10:52 11:10:51.925 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135344362136 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.925 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 11:10:52 11:10:51.925 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135344362136 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131692929630 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135635110911 11:10:52 11:10:51.926 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 11:10:52 11:10:51.926 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 128938265903 11:10:52 11:10:51.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] 
DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135635110911 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.926 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135635110911 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136257287856 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136312131090 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136312131090 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 
11:10:52 ] 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 136312131090 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138555418733 11:10:52 11:10:51.927 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140127249436 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140127249436 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140127249436 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137350571526 11:10:52 11:10:51.928 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140253111980 11:10:52 11:10:51.928 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 11:10:52 11:10:51.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 11:10:52 11:10:51.929 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.929 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and 
actual tree: 131595913269 11:10:52 11:10:51.929 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 11:10:52 11:10:51.929 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 11:10:52 11:10:51.929 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.929 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.930 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 131636489376 11:10:52 11:10:51.929 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.930 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140253111980 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.930 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 11:10:52 11:10:51.930 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.930 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 11:10:52 11:10:51.930 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 
130009790219 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140253111980 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141142451509 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141827521090 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.931 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 135344362136 11:10:52 11:10:51.931 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141827521090 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.931 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141827521090 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 142716293617 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146983270806 11:10:52 11:10:51.932 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.932 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 11:10:52 11:10:51.932 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.932 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 135635110911 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.932 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146983270806 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.932 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 11:10:52 11:10:51.932 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.933 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.933 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 11:10:52 11:10:51.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 136312131090 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 
11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146983270806 11:10:52 11:10:51.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149153029883 11:10:52 11:10:51.933 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152508267826 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.933 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.933 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152508267826 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.934 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 126,3 replyHeader:: 126,77,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 152508267826 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148288867435 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149544748429 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149544748429 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.934 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149544748429 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153426269543 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154294946292 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154294946292 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154294946292 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153978663333 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155593380680 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155593380680 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155593380680 11:10:52 11:10:51.935 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153388724331 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154360216200 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for 
node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154360216200 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154360216200 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154082926456 11:10:52 11:10:51.936 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155759621072 11:10:52 11:10:51.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x7f zxid:0x4e txntype:14 reqpath:n/a 11:10:52 11:10:51.942 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 140127249436 11:10:52 11:10:51.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x7f zxid:0x4e txntype:14 reqpath:n/a 11:10:52 11:10:51.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x80 zxid:0x4f txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 140253111980 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x80 zxid:0x4f txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x81 zxid:0x50 txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [SyncThread:0] DEBUG 
org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 141827521090 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x81 zxid:0x50 txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x82 zxid:0x51 txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 146983270806 11:10:52 11:10:51.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x82 zxid:0x51 txntype:14 reqpath:n/a 11:10:52 11:10:51.943 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155759621072 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.944 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 11:10:52 11:10:51.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x83 zxid:0x52 txntype:14 reqpath:n/a 11:10:52 11:10:51.944 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 152508267826 11:10:52 11:10:51.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x10000020e9e0000 type:multi cxid:0x83 zxid:0x52 txntype:14 reqpath:n/a 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.944 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.944 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155759621072 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156615386796 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158480614379 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158480614379 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.945 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.946 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 158480614379 11:10:52 11:10:51.946 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157762742812 11:10:52 11:10:51.946 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159934736727 11:10:52 11:10:51.946 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x84 zxid:0x53 txntype:14 reqpath:n/a 11:10:52 11:10:51.946 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.946 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 149544748429 11:10:52 11:10:51.946 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x84 zxid:0x53 txntype:14 reqpath:n/a 11:10:52 11:10:51.946 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x85 zxid:0x54 txntype:14 reqpath:n/a 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 154294946292 11:10:52 11:10:51.947 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x85 zxid:0x54 txntype:14 reqpath:n/a 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x86 zxid:0x55 txntype:14 reqpath:n/a 11:10:52 11:10:51.947 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 155593380680 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x86 zxid:0x55 txntype:14 reqpath:n/a 11:10:52 11:10:51.947 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x87 zxid:0x56 txntype:14 reqpath:n/a 11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 154360216200 11:10:52 11:10:51.948 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,85,0 request:: 
org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x87 zxid:0x56 txntype:14 reqpath:n/a
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x88 zxid:0x57 txntype:14 reqpath:n/a
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
11:10:52 11:10:51.948 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 155759621072
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x88 zxid:0x57 txntype:14 reqpath:n/a
11:10:52 11:10:51.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x89 zxid:0x58 txntype:14 reqpath:n/a
11:10:52 11:10:51.949 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
11:10:52 11:10:51.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 158480614379
11:10:52 11:10:51.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x89 zxid:0x58 txntype:14 reqpath:n/a
11:10:52 11:10:51.949 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790
11:10:52 11:10:51.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
11:10:52 11:10:51.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
11:10:52 11:10:51.949 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0
11:10:52 11:10:51.950 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 138,3 replyHeader:: 138,88,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39}
11:10:52 11:10:51.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x8b zxid:0x59 txntype:14 reqpath:n/a
11:10:52 11:10:51.950 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
11:10:52 11:10:51.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 159934736727
11:10:52 11:10:51.950 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x8b zxid:0x59 txntype:14 reqpath:n/a
11:10:52 11:10:51.951 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
11:10:52 11:10:51.952 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
11:10:52 11:10:51.953 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 139,14 replyHeader:: 139,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed
11:10:52 11:10:51.953 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
11:10:52 11:10:51.953 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216251953, latencyMs=35, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
11:10:52 11:10:51.953 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed:
11:10:52 11:10:51.953 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata
11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
11:10:52 11:10:51.954 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":34.024,"requestQueueTimeMs":0.135,"localTimeMs":33.272,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.165,"sendTimeMs":0.451,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
11:10:52 ]
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
11:10:52 , 'ip,'127.0.0.1
11:10:52 ]
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 159934736727
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000
11:10:52 11:10:51.965 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
11:10:52
11:10:52 11:10:51.966 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
11:10:52 11:10:51.966 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
11:10:52 ]
11:10:52 11:10:51.966 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
11:10:52 , 'ip,'127.0.0.1
11:10:52 ]
11:10:52 11:10:51.966 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 159934736727
11:10:52 11:10:51.966 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159703828703
11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160732836956
11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000
11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
11:10:52 ]
11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160732836956 11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.967 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160732836956 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164780119012 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 165387783053 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 165387783053 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.968 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 165387783053 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167595021752 11:10:52 11:10:51.969 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167833307675 11:10:52 11:10:51.970 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.970 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.970 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.970 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 11:10:52 11:10:51.970 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.970 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.970 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 160732836956 11:10:52 11:10:51.970 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 11:10:52 11:10:51.971 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167833307675 11:10:52 11:10:51.971 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 11:10:52 11:10:51.971 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 11:10:52 11:10:51.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.973 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 165387783053 11:10:52 11:10:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 11:10:52 11:10:51.973 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 11:10:52 11:10:51.974 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 11:10:52 11:10:51.974 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching 
for Zxid: 5c, Digest in log and actual tree: 167833307675 11:10:52 11:10:51.974 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 167833307675 11:10:52 11:10:51.975 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 169635822630 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170115991747 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170115991747 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.975 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.976 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.976 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.976 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log 
and actual tree: 170115991747 11:10:52 11:10:51.976 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 170115991747 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170430523723 11:10:52 11:10:51.976 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171652574876 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.977 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171652574876 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.977 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.978 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 11:10:52 11:10:51.978 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.979 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 171652574876 11:10:52 11:10:51.979 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 171652574876 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 168678641188 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172163754799 11:10:52 11:10:51.979 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172163754799 11:10:52 11:10:51.979 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172163754799 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173739870637 11:10:52 11:10:51.980 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175543296716 11:10:52 11:10:51.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 11:10:52 11:10:51.980 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.981 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 172163754799 11:10:52 11:10:51.981 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 
11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175543296716 11:10:52 11:10:51.981 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175543296716 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172893656154 11:10:52 11:10:51.981 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174987639487 11:10:52 11:10:51.981 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.982 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.982 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 175543296716 11:10:52 11:10:51.982 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
174987639487 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.982 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.982 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 11:10:52 11:10:51.983 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.983 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.983 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 11:10:52 11:10:51.983 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174987639487 11:10:52 11:10:51.983 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175837630797 11:10:52 11:10:51.983 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 174987639487 11:10:52 11:10:51.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179409167351 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179409167351 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.984 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.984 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179409167351 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179767007845 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179775105461 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179775105461 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179775105461 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178309685890 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182588489501 11:10:52 11:10:51.984 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.984 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 
type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 11:10:52 11:10:51.985 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.985 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.985 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.987 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 179409167351 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182588489501 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.987 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182588489501 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183525360744 11:10:52 11:10:51.987 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183939416490 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183939416490 11:10:52 11:10:51.988 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: 
org.apache.zookeeper.MultiResponse@baab4a72 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183939416490 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184789436796 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187083210820 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187083210820 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187083210820 11:10:52 11:10:51.988 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187262986518 11:10:52 11:10:51.988 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor 
- Digest got from outstandingChanges is: 191257833051 11:10:52 11:10:51.989 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 179775105461 11:10:52 11:10:51.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 191257833051 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 191257833051 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193476759658 11:10:52 11:10:51.989 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 11:10:52 11:10:51.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196703136098 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.989 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 
11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196703136098 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.990 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.990 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 182588489501 11:10:52 11:10:51.990 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196703136098 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194483423089 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195763303167 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195763303167 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.990 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.990 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.990 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195763303167 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195534277645 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195958170619 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195958170619 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 195958170619 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196788304841 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197514389020 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197514389020 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.991 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 11:10:52 11:10:51.992 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 183939416490 11:10:52 11:10:51.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197514389020 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196854274913 11:10:52 11:10:51.992 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198251944219 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.992 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198251944219 11:10:52 
11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.992 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.993 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.993 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 187083210820 11:10:52 11:10:51.993 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 11:10:52 11:10:51.993 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.993 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.993 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.993 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 11:10:52 11:10:51.994 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 11:10:52 11:10:51.994 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 191257833051 11:10:52 11:10:51.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 11:10:52 11:10:51.994 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198251944219 11:10:52 11:10:51.994 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197582395854 11:10:52 11:10:51.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 11:10:52 11:10:51.994 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198026357822 11:10:52 11:10:51.994 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.995 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 11:10:52 11:10:51.995 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.995 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 196703136098 11:10:52 11:10:51.995 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 11:10:52 11:10:51.995 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.995 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 11:10:52 11:10:51.996 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.996 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 11:10:52 11:10:51.996 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.996 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 195763303167 11:10:52 11:10:51.997 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 11:10:52 11:10:51.997 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198026357822 11:10:52 11:10:51.997 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.997 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 11:10:52 11:10:51.997 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.997 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 11:10:52 11:10:51.998 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:51.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 195958170619 11:10:52 11:10:51.998 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.998 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198026357822 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197779309228 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199311754362 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.998 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199311754362 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.998 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199311754362 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199675882952 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199747648666 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.999 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199747648666 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199747648666 11:10:52 11:10:51.999 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196093516556 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200148028564 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200148028564 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200148028564 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196418353674 11:10:52 11:10:52.000 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 11:10:52 11:10:52.000 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200667596245 11:10:52 11:10:52.001 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 197514389020 11:10:52 11:10:52.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 11:10:52 11:10:52.001 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.001 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 11:10:52 11:10:52.001 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.001 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.001 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 11:10:52 11:10:52.002 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 198251944219 11:10:52 11:10:52.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 11:10:52 11:10:52.002 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200667596245 11:10:52 11:10:52.002 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.002 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 11:10:52 11:10:52.002 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.002 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 11:10:52 11:10:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.003 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 198026357822 11:10:52 11:10:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 11:10:52 11:10:52.003 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.003 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.003 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 11:10:52 11:10:52.003 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.003 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,109,0 request:: org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: org.apache.zookeeper.MultiResponse@5b4945d8 11:10:52 11:10:52.004 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 199311754362 11:10:52 11:10:52.004 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 200667596245 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202257430978 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206267414992 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206267414992 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.004 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.004 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false 
header:: 160,14 replyHeader:: 160,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 206267414992 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207865515429 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208513852576 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208513852576 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208513852576 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207491507998 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210382451893 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210382451893 11:10:52 11:10:52.005 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210382451893 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207411852855 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208525898802 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.006 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.009 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6f, Digest in log and actual tree: 199747648666 11:10:52 11:10:52.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 11:10:52 11:10:52.009 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208525898802 11:10:52 11:10:52.009 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.009 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 11:10:52 11:10:52.009 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.009 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 161,14 replyHeader:: 161,111,0 request:: org.apache.zookeeper.MultiOperationRecord@849f947 response:: org.apache.zookeeper.MultiResponse@228c7751 11:10:52 11:10:52.010 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 70, Digest in log and actual tree: 200148028564 11:10:52 11:10:52.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 11:10:52 11:10:52.010 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.010 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.010 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.010 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 11:10:52 11:10:52.010 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,112,0 request:: org.apache.zookeeper.MultiOperationRecord@10c9218c response:: org.apache.zookeeper.MultiResponse@2b0b9f96 11:10:52 11:10:52.011 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.011 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 71, Digest in log and actual tree: 200667596245 11:10:52 11:10:52.011 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208525898802 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210157543439 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213428829733 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.011 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 163,14 replyHeader:: 163,113,0 request:: 
org.apache.zookeeper.MultiOperationRecord@a5116167 response:: org.apache.zookeeper.MultiResponse@bf53df71 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213428829733 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.011 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 11:10:52 11:10:52.011 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.013 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.013 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 72, Digest in log and actual tree: 206267414992 11:10:52 11:10:52.013 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.013 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 213428829733 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 211830870546 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214419394641 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.013 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 164,14 replyHeader:: 164,114,0 request:: org.apache.zookeeper.MultiOperationRecord@7392b052 response:: org.apache.zookeeper.MultiResponse@8dd52e5c 
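The entries above are the embedded single-node ZooKeeper (client port 39173) validating the Kafka broker's multi transactions (txntype:14) against the open world:anyone ACL (perms 31 = ALL) and confirming that the tree digest matches the transaction log for each zxid; the PathTrie lines show the writes landing under /brokers. For anyone who wants to inspect that subtree while the pairwise test is running, a minimal sketch along these lines could work; the connect string and the /brokers paths are taken from the log above, while the timeout and the bare main method are assumptions and this helper is not part of this build.

import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.Watcher.Event.KeeperState;
import org.apache.zookeeper.ZooKeeper;

// Hypothetical inspection helper: connects to the embedded ZooKeeper used by
// the test (127.0.0.1:39173 in the log above) and lists the broker
// registrations that the multi ops are writing under /brokers.
public class BrokerZnodeDump {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:39173", 15000, event -> {
            if (event.getState() == KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();
        // world:anyone carries ALL perms (31) in this test setup, so an
        // unauthenticated read of the registration znodes should succeed.
        List<String> ids = zk.getChildren("/brokers/ids", false);
        for (String id : ids) {
            byte[] data = zk.getData("/brokers/ids/" + id, false, null);
            System.out.println("broker " + id + " -> " + new String(data));
        }
        zk.close();
    }
}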
11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214419394641 11:10:52 11:10:52.013 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 73, Digest in log and actual tree: 208513852576 11:10:52 11:10:52.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214419394641 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214726996269 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215567139602 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.014 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 165,14 replyHeader:: 165,115,0 request:: org.apache.zookeeper.MultiOperationRecord@aad33e50 response:: org.apache.zookeeper.MultiResponse@c515bc5a 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.014 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215567139602 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215567139602 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215325107342 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217988123742 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217988123742 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217988123742 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 216356468827 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218620793374 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218620793374 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.015 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 74, Digest in log and actual tree: 210382451893 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218620793374 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221292635619 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224833895547 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.016 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.016 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 166,14 replyHeader:: 166,116,0 request:: org.apache.zookeeper.MultiOperationRecord@c208c8d response:: org.apache.zookeeper.MultiResponse@26630a97 11:10:52 11:10:52.016 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 75, Digest in log and actual tree: 208525898802 11:10:52 11:10:52.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x10000020e9e0000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 11:10:52 11:10:52.017 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224833895547 11:10:52 11:10:52.017 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 11:10:52 11:10:52.017 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.017 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 167,14 replyHeader:: 167,117,0 request:: org.apache.zookeeper.MultiOperationRecord@3f1b7e2b response:: org.apache.zookeeper.MultiResponse@595dfc35 11:10:52 11:10:52.017 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 76, Digest in log and actual tree: 213428829733 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 11:10:52 11:10:52.018 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 11:10:52 11:10:52.018 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.018 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.018 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.018 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 168,14 replyHeader:: 168,118,0 request:: org.apache.zookeeper.MultiOperationRecord@75ed030f response:: org.apache.zookeeper.MultiResponse@902f8119 11:10:52 11:10:52.018 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, 
includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 77, Digest in log and actual tree: 214419394641 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 11:10:52 11:10:52.018 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a 11:10:52 11:10:52.019 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 169,14 replyHeader:: 169,119,0 request:: org.apache.zookeeper.MultiOperationRecord@e276c4ed response:: org.apache.zookeeper.MultiResponse@fcb942f7 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 224833895547 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 228104388345 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229614594139 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.019 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.020 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.020 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 215567139602 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229614594139 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.020 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xaa zxid:0x78 
txntype:14 reqpath:n/a 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229614594139 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230938962653 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234665194322 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234665194322 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234665194322 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 231993624397 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232038914790 11:10:52 11:10:52.020 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232038914790 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 
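Interleaved with the ZooKeeper traffic, the Kafka client lines show the test consumer (clientId mso-123456-consumer-..., groupId mso-group) fetching metadata for my-test-topic from the embedded broker on localhost:39115 over SASL_PLAINTEXT (principal User:admin) and then issuing a FIND_COORDINATOR request for its group. A sketch of a consumer configured the same way is below; the bootstrap address, group id, client id prefix, topic name, and listener type come from the log, while the PLAIN admin/admin JAAS credentials and the String deserializers are assumptions rather than values confirmed by this build.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical sketch of a consumer matching the one in the log: group
// "mso-group" reading "my-test-topic" from the embedded broker on
// localhost:39115 over SASL_PLAINTEXT.
public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // Credentials below are assumed for illustration, not taken from this build.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin\";");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing triggers the same METADATA and FIND_COORDINATOR
            // exchanges that appear in the surrounding log lines.
            consumer.subscribe(List.of("my-test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + ": " + record.value());
            }
        }
    }
}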
11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 232038914790 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234702489315 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238546024739 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238546024739 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.021 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.021 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 170,14 replyHeader:: 170,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b 11:10:52 11:10:52.022 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.022 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.022 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.772,"requestQueueTimeMs":0.279,"localTimeMs":2.133,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.093,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.022 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.022 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.022 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.022 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.022 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238546024739 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 237456714175 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240882962260 11:10:52 11:10:52.023 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240882962260 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.023 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240882962260 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 241947436528 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243362304261 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243362304261 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.024 [ProcessThread(sid:0 
cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243362304261 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 245788254605 11:10:52 11:10:52.024 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247275673540 11:10:52 11:10:52.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 11:10:52 11:10:52.026 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.026 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 217988123742 11:10:52 11:10:52.027 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 11:10:52 11:10:52.027 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 171,14 replyHeader:: 171,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93 11:10:52 11:10:52.027 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 11:10:52 11:10:52.028 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.028 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 218620793374 11:10:52 11:10:52.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 11:10:52 11:10:52.028 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.028 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.028 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 11:10:52 11:10:52.028 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.028 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: 
org.apache.zookeeper.MultiResponse@58eef31b 11:10:52 11:10:52.029 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.029 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 224833895547 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247275673540 11:10:52 11:10:52.029 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.029 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.030 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247275673540 11:10:52 11:10:52.030 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 244886685532 11:10:52 11:10:52.030 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246378305588 11:10:52 11:10:52.030 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.030 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2 11:10:52 11:10:52.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 11:10:52 11:10:52.031 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 229614594139 11:10:52 11:10:52.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246378305588 11:10:52 11:10:52.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.031 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.031 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 234665194322 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.032 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log and actual tree: 232038914790 11:10:52 11:10:52.032 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246378305588 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243963949581 11:10:52 11:10:52.032 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247580351085 11:10:52 
11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247580351085 11:10:52 11:10:52.033 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: org.apache.zookeeper.MultiResponse@85c50f31 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247580351085 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249364512848 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252053357644 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252053357644 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.033 
[ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252053357644 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253910832084 11:10:52 11:10:52.033 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257983331858 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257983331858 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257983331858 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 260420677034 11:10:52 11:10:52.034 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261623656458 11:10:52 11:10:52.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 11:10:52 11:10:52.034 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 238546024739 11:10:52 11:10:52.034 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 11:10:52 11:10:52.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 11:10:52 11:10:52.035 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99 11:10:52 11:10:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 240882962260 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb2 zxid:0x80 txntype:14 reqpath:n/a 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261623656458 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.035 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.035 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.035 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 178,14 replyHeader:: 178,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 11:10:52 11:10:52.036 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 243362304261 11:10:52 11:10:52.036 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb3 zxid:0x81 txntype:14 reqpath:n/a 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261623656458 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261099448113 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261653092023 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261653092023 11:10:52 11:10:52.036 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.036 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 11:10:52 11:10:52.036 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.037 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 82, Digest in log and actual tree: 247275673540 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb4 zxid:0x82 txntype:14 reqpath:n/a 11:10:52 11:10:52.037 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261653092023 11:10:52 11:10:52.037 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261062631776 11:10:52 11:10:52.037 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 11:10:52 11:10:52.037 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 246378305588 11:10:52 11:10:52.037 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb5 zxid:0x83 txntype:14 reqpath:n/a 11:10:52 11:10:52.037 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261783015415 11:10:52 11:10:52.037 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xb6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261783015415 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.038 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xb6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 
11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261783015415 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264189042783 11:10:52 11:10:52.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb7 zxid:0x84 txntype:14 reqpath:n/a 11:10:52 11:10:52.038 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267957528802 11:10:52 11:10:52.038 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 182,3 replyHeader:: 182,131,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.038 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 247580351085 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb7 zxid:0x84 txntype:14 reqpath:n/a 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb8 zxid:0x85 txntype:14 reqpath:n/a 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.039 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 252053357644 11:10:52 11:10:52.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb8 zxid:0x85 txntype:14 reqpath:n/a 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267957528802 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 11:10:52 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267957528802 11:10:52 11:10:52.039 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266067523914 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267381572106 11:10:52 11:10:52.039 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xb9 zxid:0x86 txntype:14 reqpath:n/a 11:10:52 11:10:52.040 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 257983331858 11:10:52 11:10:52.040 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xb9 zxid:0x86 txntype:14 reqpath:n/a 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xba zxid:0x87 txntype:14 reqpath:n/a 11:10:52 11:10:52.041 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 261623656458 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xba zxid:0x87 txntype:14 reqpath:n/a 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 
261653092023 11:10:52 11:10:52.041 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 11:10:52 11:10:52.041 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 11:10:52 11:10:52.042 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 261783015415 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 11:10:52 11:10:52.042 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 267957528802 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 11:10:52 11:10:52.042 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:multi cxid:0xbe zxid:0x8b txntype:14 reqpath:n/a 11:10:52 11:10:52.042 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 11:10:52 11:10:52.043 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:10:52 11:10:52.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 267381572106 11:10:52 11:10:52.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:multi cxid:0xbe zxid:0x8b txntype:14 reqpath:n/a 11:10:52 11:10:52.043 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xbf zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.043 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 190,14 replyHeader:: 190,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 11:10:52 11:10:52.043 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 191,3 replyHeader:: 191,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.044 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 11:10:52 11:10:52.044 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.045 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.045 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252045, latencyMs=22, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.046 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.046 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 11:10:52 11:10:52.046 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":21.944,"requestQueueTimeMs":0.385,"localTimeMs":20.754,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.493,"sendTimeMs":0.311,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
11:10:52 11:10:52.054 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller 
id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.055 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 11:10:52 11:10:52.056 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 11:10:52 11:10:52.057 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 11:10:52 11:10:52.058 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=N6JdGKSnS5uP5QsQcans3w, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=39115)]) 11:10:52 11:10:52.059 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:10:52 11:10:52.061 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 11:10:52 11:10:52.100 [data-plane-kafka-request-handler-0] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 11:10:52 11:10:52.100 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 11:10:52 11:10:52.102 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.102 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.103 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 192,4 replyHeader:: 192,139,0 request:: '/config/topics/__consumer_offsets,F 
response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.107 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.108 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.108 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.108 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.108 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.109 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.110 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.110 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.111 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 11:10:52 11:10:52.111 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 11:10:52 11:10:52.111 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.111 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
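The getData replies above (and the identical ones that follow) return the /config/topics/__consumer_offsets znode as a hex dump. Decoded, the payload is the JSON {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, which matches the properties LogManager reports each time it creates an __consumer_offsets partition log. A minimal Java sketch of that decoding follows; the class name and the hard-coded hex string are illustrative only, not part of this build:

    import java.nio.charset.StandardCharsets;

    public class ZnodeHexDecode {
        // Hex payload as printed by the ZooKeeper client for
        // /config/topics/__consumer_offsets (leading '#' marker stripped).
        private static final String HEX =
            "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a"
          + "2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c22736567"
          + "6d656e742e6279746573223a22313034383537363030227d7d";

        public static void main(String[] args) {
            byte[] bytes = new byte[HEX.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(HEX.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints: {"version":1,"config":{"compression.type":"producer",
            //          "cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
    }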
11:10:52 11:10:52.118 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.118 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.119 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 193,4 replyHeader:: 193,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.120 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.120 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.121 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.121 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.121 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.122 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 11:10:52 11:10:52.122 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 11:10:52 11:10:52.122 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.122 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
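The [main] entries above come from the test's Kafka consumer (clientId mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId mso-group) fetching metadata for my-test-topic from the embedded broker at localhost:39115 over SASL_PLAINTEXT. A minimal sketch of a consumer configured along those lines is below; the property keys are standard Kafka client settings, but the SASL mechanism, JAAS credentials and deserializers are assumptions for illustration, not values taken from this build:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PairwiseTestConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker endpoint and group/client ids as reported in the log above.
            props.put("bootstrap.servers", "localhost:39115");
            props.put("group.id", "mso-group");
            props.put("client.id", "mso-123456-consumer");
            // The request log shows SASL_PLAINTEXT and principal User:admin;
            // the PLAIN mechanism and JAAS credentials here are assumptions.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() + poll() triggers the METADATA and FIND_COORDINATOR
                // requests visible in the surrounding log entries.
                consumer.subscribe(List.of("my-test-topic"));
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }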
11:10:52 11:10:52.125 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.125 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.125 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.473,"requestQueueTimeMs":0.378,"localTimeMs":1.747,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.256,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.125 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.125 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.126 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.128 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.128 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.129 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 194,3 replyHeader:: 194,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.129 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.130 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.130 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.130 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 196,3 replyHeader:: 196,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.131 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 11:10:52 11:10:52.131 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.131 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.131 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252131, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.132 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.132 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":5.197,"requestQueueTimeMs":0.114,"localTimeMs":4.883,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.071,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.132 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.133 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.134 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.134 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.134 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 11:10:52 11:10:52.135 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 11:10:52 11:10:52.135 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.135 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.140 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.140 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.142 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.142 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.143 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.143 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.143 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.143 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.143 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.144 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.144 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 11:10:52 11:10:52.144 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 11:10:52 11:10:52.144 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.144 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.148 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.148 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.149 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 198,4 replyHeader:: 198,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.150 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.151 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.151 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.151 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.151 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.151 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.152 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.157 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.157 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.157 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 199,4 replyHeader:: 199,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.159 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.160 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.165 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.165 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.167 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.167 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.168 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.168 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.168 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.168 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.168 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.169 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.169 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 11:10:52 11:10:52.169 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 11:10:52 11:10:52.169 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.169 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.173 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.173 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.173 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.173 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.173 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.173 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.174 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 201,4 replyHeader:: 201,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.176 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.176 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.176 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.176 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.176 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.177 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.177 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.178 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.178 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 11:10:52 11:10:52.178 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 11:10:52 11:10:52.178 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.178 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
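The per-partition "Created log ... with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}" entries above are ordinary topic-level overrides. As a rough sketch only (the broker address and topic name are placeholders, not values from this run), the same settings could be applied to any topic through the Kafka AdminClient:

import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; the embedded broker in this run listens on a random port.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Same overrides as the __consumer_offsets entries in the console output above.
            NewTopic topic = new NewTopic("example-compacted-topic", 50, (short) 1)
                    .configs(Map.of(
                            TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                            TopicConfig.COMPRESSION_TYPE_CONFIG, "producer",
                            TopicConfig.SEGMENT_BYTES_CONFIG, "104857600")); // 100 MiB segments
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}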
11:10:52 11:10:52.190 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.190 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.190 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 202,4 replyHeader:: 202,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.192 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.193 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.200 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.201 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 203,4 replyHeader:: 203,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.202 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.202 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.203 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.203 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.203 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.203 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.203 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.204 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.204 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 11:10:52 11:10:52.204 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 11:10:52 11:10:52.204 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.204 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.209 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.209 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.210 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.212 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.213 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.217 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.218 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.220 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.221 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.224 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.224 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.227 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.227 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.228 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.228 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.228 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.228 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, 
coordinatorKeys=[mso-group]) 11:10:52 11:10:52.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.228 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.275,"requestQueueTimeMs":0.341,"localTimeMs":1.613,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.074,"sendTimeMs":0.244,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.229 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.230 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.230 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.230 [main-SendThread(127.0.0.1:39173)] 
DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 207,3 replyHeader:: 207,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.231 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.231 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.231 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.231 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 208,3 replyHeader:: 208,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 11:10:52 11:10:52.231 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
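The TopicExistsException logged above is benign: it just means another request handler finished auto-creating __consumer_offsets first, so the broker drops the duplicate creation at DEBUG level. On the client side the usual pattern, sketched here as a fragment that assumes an already-configured Admin instance (nothing from this test), is to treat that cause as success:

import java.util.List;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

final class CreateIfAbsent {
    // Create the topic, swallowing only the "already exists" race seen in the log above.
    static void createIfAbsent(Admin admin, NewTopic topic) throws InterruptedException {
        try {
            admin.createTopics(List.of(topic)).all().get();
        } catch (ExecutionException e) {
            if (!(e.getCause() instanceof TopicExistsException)) {
                throw new IllegalStateException("Topic creation failed", e.getCause());
            }
            // Another creator won the race; the topic is usable as-is.
        }
    }
}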
11:10:52 11:10:52.232 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.232 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.233 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252232, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.233 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.233 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
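FIND_COORDINATOR returning errorCode=15 (COORDINATOR_NOT_AVAILABLE) is likewise expected at this point: the broker is still materialising the 50 __consumer_offsets partitions that back group coordination, so no coordinator for mso-group can be resolved yet. The consumer only logs this at DEBUG and repeats the lookup inside poll(). A minimal sketch of such a consumer loop (the bootstrap address and deserializers are assumptions, and the SASL_PLAINTEXT settings this run uses are omitted):

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollUntilCoordinatorReady {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder address; the embedded broker in this run uses a random port (39115 here).
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // poll() transparently repeats FindCoordinator (and metadata refreshes) until the
            // group coordinator becomes available, so COORDINATOR_NOT_AVAILABLE during startup
            // needs no special handling in application code.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("Fetched " + records.count() + " records");
        }
    }
}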
11:10:52 11:10:52.233 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":3.825,"requestQueueTimeMs":0.109,"localTimeMs":3.471,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.063,"sendTimeMs":0.179,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.233 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.233 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 11:10:52 11:10:52.233 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 11:10:52 11:10:52.233 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.234 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.238 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.238 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.238 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.241 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.241 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.241 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.241 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.241 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.242 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.242 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.243 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.243 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 11:10:52 11:10:52.243 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 11:10:52 11:10:52.243 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.243 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.247 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.247 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.248 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.250 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.251 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.256 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.256 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 211,4 replyHeader:: 211,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.258 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.258 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.258 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.258 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.259 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.259 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.259 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.259 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.260 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 11:10:52 11:10:52.260 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 11:10:52 11:10:52.260 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.260 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.263 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.263 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.264 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 212,4 replyHeader:: 212,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.266 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.267 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.267 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.267 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 11:10:52 11:10:52.267 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 11:10:52 11:10:52.267 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.268 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.271 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.271 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.271 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.271 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.271 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.271 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.272 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 213,4 replyHeader:: 213,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.274 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.274 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.274 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.274 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.274 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.275 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.275 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.275 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.275 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 11:10:52 11:10:52.275 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 11:10:52 11:10:52.276 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.276 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.280 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.280 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.280 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.282 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.282 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.283 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.283 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.283 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.283 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.284 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
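[Note] The index figures in these entries are internally consistent: each segment's offset index and time index are sized by segment.index.bytes (10485760 bytes here), an offset-index entry is 8 bytes and a time-index entry is 12 bytes. That yields exactly the maxEntries values and the odd 10485756 time-index size reported above. A quick check (illustrative, not part of the build):

    public class IndexSizing {
        public static void main(String[] args) {
            int maxIndexSize = 10485760;                   // segment.index.bytes (10 MiB), as logged
            System.out.println(maxIndexSize / 8);          // 1310720  -> offset index maxEntries (8 bytes per entry)
            System.out.println(maxIndexSize / 12);         // 873813   -> time index maxEntries (12 bytes per entry)
            System.out.println((maxIndexSize / 12) * 12);  // 10485756 -> why the .timeindex "already has size 10485756"
        }
    }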
11:10:52 11:10:52.290 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.290 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.290 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.290 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.290 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.290 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.290 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.292 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.292 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.293 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.293 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.293 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.293 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
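[Note] The getData replies for /config/topics/__consumer_offsets above carry the znode payload hex-encoded (the string after '#'). Decoded as UTF-8 it is simply the topic's JSON config, {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}} (109 bytes, matching the dataLength in the stat), i.e. the same properties echoed in the LogManager lines. A small decoder for reference (illustrative only):

    public class ZnodeHexDecode {
        public static void main(String[] args) {
            // Hex payload copied from the ZooKeeper getData replies above (the part after '#').
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d";
            StringBuilder json = new StringBuilder();
            for (int i = 0; i < hex.length(); i += 2) {
                json.append((char) Integer.parseInt(hex.substring(i, i + 2), 16)); // payload is plain ASCII
            }
            // Prints: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}
            System.out.println(json);
        }
    }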
11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.294 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.314 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.314 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.315 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.318 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.318 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.318 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.318 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.318 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.319 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.319 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.319 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.319 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 11:10:52 11:10:52.320 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 11:10:52 11:10:52.320 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.320 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.324 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.324 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.325 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.325 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.327 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.327 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.328 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.328 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.328 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 
11:10:52.328 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.329 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.329 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.329 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.330 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.330 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 11:10:52 11:10:52.330 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 11:10:52 11:10:52.330 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.330 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.331 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.331 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.331 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.332 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.491,"requestQueueTimeMs":0.242,"localTimeMs":1.764,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.095,"sendTimeMs":0.389,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.332 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.332 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.334 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.334 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 218,3 replyHeader:: 218,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.334 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.335 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.335 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.335 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.336 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.336 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.336 [main-SendThread(127.0.0.1:39173)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 220,3 replyHeader:: 220,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.336 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 11:10:52 11:10:52.337 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.337 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.337 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.337 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.337 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.337 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.337 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252337, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.338 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.338 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":5.008,"requestQueueTimeMs":0.162,"localTimeMs":4.556,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.084,"sendTimeMs":0.205,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.338 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 11:10:52 11:10:52.338 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.338 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.339 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
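[Note] Error code 15 in the FindCoordinator response above is COORDINATOR_NOT_AVAILABLE: the coordinator for mso-group cannot be resolved until the __consumer_offsets partition that owns the group has a live leader, and those partitions are still being created at this point. The client therefore logs CoordinatorNotAvailableException, refreshes metadata and retries; this is normal start-up noise, not a test failure. The traffic matches an ordinary consumer subscribed to my-test-topic. A minimal sketch of such a consumer follows; it is illustrative only, and the client id, SASL credentials and topic in the real run come from the test harness.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:39115");          // embedded broker port from this run
            props.put("group.id", "mso-group");
            props.put("client.id", "mso-123456-consumer");              // the logged client id carries a generated UUID suffix
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");                       // assumed; the log only shows SASL_PLAINTEXT and principal User:admin
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");     // placeholder credentials, not taken from this build
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() drives metadata refresh and coordinator lookup internally, so a transient
                // COORDINATOR_NOT_AVAILABLE like the one above is simply retried on the next attempt.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }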
11:10:52 11:10:52.344 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.344 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.344 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.344 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.344 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.344 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.344 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 221,4 replyHeader:: 221,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.346 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.346 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.346 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.346 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
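[Note] The ZkAdminManager entry a little further up ("Topic creation failed since topic '__consumer_offsets' already exists") is likewise benign: while the 50 partitions are still materialising, another auto-creation attempt races with the one already applied by the controller, hits TopicExistsException, and DefaultAutoTopicCreationManager simply clears its inflight state. Client code that creates topics explicitly usually treats that error as success; a short sketch of the common pattern with the admin client (illustrative only, not code from this build; SASL settings omitted for brevity):

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureTopicExists {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:39115"); // embedded broker from this run
            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
                try {
                    admin.createTopics(Collections.singleton(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // anything other than "already exists" is a real failure
                    }
                    // Already created elsewhere - the same benign outcome as in the log above.
                }
            }
        }
    }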
11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 11:10:52 11:10:52.347 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 11:10:52 11:10:52.348 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.348 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.352 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.352 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.352 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 222,4 replyHeader:: 222,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.354 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.355 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.355 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.355 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.355 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.355 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.356 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.359 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.359 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.360 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.361 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.361 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.362 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.362 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.362 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.362 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.362 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.363 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.363 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 11:10:52 11:10:52.363 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 11:10:52 11:10:52.363 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.363 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.367 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.367 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.367 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.369 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.370 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.374 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.375 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.376 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.376 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.377 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.377 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.377 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.377 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.377 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.378 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.378 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 11:10:52 11:10:52.378 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 11:10:52 11:10:52.378 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.378 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.383 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.383 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.383 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.385 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.386 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
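The maxEntries figures reported by the index loaders above follow directly from the 10 MiB index cap: Kafka's offset index stores 8 bytes per entry and its time index 12 bytes per entry, so 10485760 bytes allows 1310720 and 873813 entries respectively, and 873813 * 12 = 10485756 is why each .timeindex file is reported at that slightly smaller size. A minimal check of the arithmetic:

    public class IndexSizing {
        public static void main(String[] args) {
            final int maxIndexSize = 10 * 1024 * 1024;   // 10485760, as logged
            final int offsetEntryBytes = 8;              // 4-byte relative offset + 4-byte position
            final int timeEntryBytes = 12;               // 8-byte timestamp + 4-byte relative offset

            System.out.println(maxIndexSize / offsetEntryBytes);                   // 1310720
            System.out.println(maxIndexSize / timeEntryBytes);                     // 873813
            System.out.println((maxIndexSize / timeEntryBytes) * timeEntryBytes);  // 10485756
        }
    }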
11:10:52 11:10:52.391 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.391 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 227,4 replyHeader:: 227,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.393 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.393 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.393 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.393 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 11:10:52 11:10:52.394 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.395 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.399 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.399 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 228,4 replyHeader:: 228,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.401 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.402 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.407 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.407 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.407 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 229,4 replyHeader:: 229,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.409 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.410 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.416 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.417 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 230,4 replyHeader:: 230,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.419 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.420 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.431 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.431 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.435 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.435 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.435 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.118,"requestQueueTimeMs":0.219,"localTimeMs":1.384,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.399,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.435 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.435 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.436 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.437 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.438 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 231,3 replyHeader:: 231,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.438 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.439 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 
request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.439 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.439 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.439 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.439 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 233,3 replyHeader:: 233,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.439 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 11:10:52 11:10:52.440 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.440 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.440 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252440, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.440 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":4.119,"requestQueueTimeMs":0.154,"localTimeMs":3.662,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.223,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.441 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.441 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 11:10:52 11:10:52.441 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.441 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.441 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.442 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.442 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.442 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.442 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
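A few entries above, the broker's auto-creation of __consumer_offsets races with the partitions already being loaded, so ZkAdminManager logs TopicExistsException and the inflight CreatableTopic (numPartitions=50, replicationFactor=1, plus the three configs decoded earlier) is simply cleared. A client that wants the same "create if absent" behaviour for its own topic can treat that exception as benign. A minimal sketch using the public Admin API, with the broker address taken from this run, a purely illustrative topic name, and the SASL settings this test broker would additionally require omitted for brevity:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
            try (Admin admin = Admin.create(props)) {
                // Same settings the broker reports for __consumer_offsets; the topic name is a placeholder.
                NewTopic topic = new NewTopic("example-compacted-topic", 50, (short) 1)
                        .configs(Map.of(
                                "cleanup.policy", "compact",
                                "compression.type", "producer",
                                "segment.bytes", "104857600"));
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // anything other than "already exists" is a real failure
                    }
                }
            }
        }
    }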
11:10:52 11:10:52.443 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.443 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 11:10:52 11:10:52.443 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 11:10:52 11:10:52.443 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.443 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.447 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.447 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.448 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.449 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.449 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.450 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.450 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.450 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.450 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.450 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.451 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.451 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 11:10:52 11:10:52.451 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 11:10:52 11:10:52.451 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.451 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
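The FIND_COORDINATOR exchange above comes back with errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the __consumer_offsets partitions are still being brought online, so the mso-group consumer logs CoordinatorNotAvailableException, refreshes metadata, and retries on its next poll. A minimal sketch of a consumer configured the way this one appears to be, where the bootstrap server, group id and topic come from the log, while the SASL mechanism and credentials are placeholders (the log only shows the SASL_PLAINTEXT listener and the User:admin principal):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // The broker listener is SASL_PLAINTEXT; mechanism and credentials below are placeholders.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                            + "username=\"admin\" password=\"<placeholder>\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // poll() keeps retrying coordinator discovery until __consumer_offsets is fully online.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r ->
                        System.out.printf("%s-%d@%d: %s%n", r.topic(), r.partition(), r.offset(), r.value()));
            }
        }
    }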
11:10:52 11:10:52.456 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.456 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.456 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.458 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.458 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.459 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.459 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.459 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.459 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
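For reference, the kafka.request.logger JSON entries above break each request's totalTimeMs into request-queue, local, remote, response-queue and send components; for the FIND_COORDINATOR entry those components add up to the reported total up to rounding. A minimal check of that arithmetic, using the numbers from the log:

    public class RequestTimeBreakdown {
        public static void main(String[] args) {
            // Components of the FIND_COORDINATOR "Completed request" entry above (milliseconds).
            double requestQueue = 0.154, local = 3.662, remote = 0.0, responseQueue = 0.079, send = 0.223;
            double sum = requestQueue + local + remote + responseQueue + send;
            // Prints 4.118, matching the logged totalTimeMs of 4.119 up to rounding.
            System.out.printf("%.3f%n", sum);
        }
    }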
11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.460 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.464 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.465 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 236,4 replyHeader:: 236,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.466 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.466 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.466 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.467 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.467 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.467 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.467 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.468 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.468 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 11:10:52 11:10:52.468 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 11:10:52 11:10:52.468 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.468 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.472 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.472 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.472 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.472 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.472 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.472 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.472 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 237,4 replyHeader:: 237,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.474 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.474 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.474 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.474 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.474 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.475 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.479 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.480 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.480 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 238,4 replyHeader:: 238,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.482 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.483 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.486 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.486 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.487 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.489 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.490 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.494 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.494 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.494 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.496 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.496 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.496 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - 
Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.496 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 11:10:52 11:10:52.497 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 11:10:52 11:10:52.498 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.498 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.502 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.503 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.504 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.504 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.505 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.505 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.505 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.505 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.505 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.506 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.506 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 11:10:52 11:10:52.506 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 11:10:52 11:10:52.506 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.506 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.511 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.511 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 242,4 replyHeader:: 242,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.513 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.514 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.518 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.518 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.519 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 243,4 replyHeader:: 243,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.520 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.520 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.520 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.520 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.520 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.521 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.526 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.526 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 244,4 replyHeader:: 244,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.528 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.529 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.529 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 11:10:52 11:10:52.529 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 11:10:52 11:10:52.529 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.529 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.533 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.533 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.533 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.533 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.533 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.533 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.534 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.534 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.534 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 
11:10:52.536 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.537 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.537 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.537 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.537 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.538 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.538 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.538 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.056,"requestQueueTimeMs":0.3,"localTimeMs":1.299,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.135,"sendTimeMs":0.319,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.538 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.539 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.540 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 11:10:52 11:10:52.541 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 246,3 replyHeader:: 246,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 11:10:52 11:10:52.542 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.542 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.542 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 11:10:52 11:10:52.542 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 247,3 replyHeader:: 247,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1768216251835,1768216251835,0,1,0,0,548,1,39} 11:10:52 11:10:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 11:10:52 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
11:10:52 11:10:52.543 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 11:10:52 11:10:52.544 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 11:10:52 11:10:52.544 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252544, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 11:10:52 11:10:52.544 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator lookup failed: 11:10:52 11:10:52.544 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Coordinator discovery failed, refreshing metadata 11:10:52 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
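[Editor's note] The FIND_COORDINATOR exchange above comes back with errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the broker is still creating and loading the __consumer_offsets partitions; the client logs "Coordinator discovery failed, refreshing metadata" and retries on the next poll. A minimal, hypothetical sketch of a consumer configured like the one in this log (bootstrap localhost:39115, group mso-group, topic my-test-topic, SASL_PLAINTEXT); the SASL credentials below are placeholders, not values taken from the job:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // The embedded broker uses SASL_PLAINTEXT; username/password here are placeholders.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"<placeholder>\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // poll() drives the FindCoordinator/JoinGroup handshake internally; a transient
                // COORDINATOR_NOT_AVAILABLE like the one logged above is retried by the client.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }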
11:10:52 11:10:52.544 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":5.126,"requestQueueTimeMs":0.134,"localTimeMs":4.723,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.068,"sendTimeMs":0.199,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.553 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.553 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.554 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 248,4 replyHeader:: 248,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.556 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.557 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.557 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.557 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.557 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.557 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.558 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.563 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.564 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.565 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.565 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.565 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.565 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.566 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.571 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.572 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.573 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.573 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.573 [data-plane-kafka-request-handler-0] DEBUG 
kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.573 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.574 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.574 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.574 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 11:10:52 11:10:52.575 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.575 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 11:10:52 11:10:52.575 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 11:10:52 11:10:52.575 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.575 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
11:10:52 11:10:52.581 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:10:52 11:10:52.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 11:10:52 11:10:52.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:10:52 11:10:52.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:10:52 ] 11:10:52 11:10:52.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:10:52 , 'ip,'127.0.0.1 11:10:52 ] 11:10:52 11:10:52.582 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1768216251826,1768216251826,0,0,0,0,109,0,37} 11:10:52 11:10:52.583 [data-plane-kafka-request-handler-0] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 11:10:52 11:10:52.583 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 11:10:52 11:10:52.583 [data-plane-kafka-request-handler-0] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 11:10:52 11:10:52.583 [data-plane-kafka-request-handler-0] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 11:10:52 11:10:52.583 [data-plane-kafka-request-handler-0] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit8944902187107510952] Loading producer state till offset 0 with message format version 2 11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit8944902187107510952/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 11:10:52 11:10:52.584 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(N6JdGKSnS5uP5QsQcans3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 11:10:52 11:10:52.585 [data-plane-kafka-request-handler-0] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 11:10:52 11:10:52.591 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 11:10:52 11:10:52.592 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 11:10:52 11:10:52.593 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.595 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 11:10:52 11:10:52.595 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 11:10:52 11:10:52.595 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 11:10:52 11:10:52.596 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 11:10:52 11:10:52.597 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 11:10:52 11:10:52.598 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 11:10:52 11:10:52.599 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 11:10:52 11:10:52.600 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 11:10:52 11:10:52.601 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 11:10:52 11:10:52.602 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 9 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 11:10:52 11:10:52.602 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 11:10:52 11:10:52.602 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 11:10:52 11:10:52.602 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 11:10:52 11:10:52.602 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 11:10:52 11:10:52.602 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 11:10:52 11:10:52.603 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 11:10:52 11:10:52.603 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 11:10:52 11:10:52.603 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 11:10:52 11:10:52.603 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 11:10:52 11:10:52.603 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 11:10:52 11:10:52.603 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 11:10:52 11:10:52.604 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 
11:10:52 11:10:52.604 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 11:10:52 11:10:52.604 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 11:10:52 11:10:52.604 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 11:10:52 11:10:52.604 [data-plane-kafka-request-handler-0] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 11:10:52 11:10:52.604 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.604 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 11:10:52 11:10:52.604 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.604 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 11:10:52 11:10:52.604 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.604 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 11:10:52 11:10:52.604 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.604 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 11:10:52 11:10:52.605 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 544ms correlationId 3 from controller 1 for 50 partitions 11:10:52 11:10:52.605 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
11:10:52 11:10:52.605 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 11:10:52 11:10:52.605 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.605 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 11:10:52 11:10:52.605 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.605 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 11:10:52 11:10:52.605 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.605 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 11:10:52 11:10:52.606 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.606 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 11:10:52 11:10:52.606 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.606 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 11:10:52 11:10:52.606 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.606 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 11:10:52 11:10:52.606 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
11:10:52 11:10:52.606 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 11:10:52 11:10:52.606 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.606 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 11:10:52 11:10:52.607 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.607 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 11:10:52 11:10:52.607 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=N6JdGKSnS5uP5QsQcans3w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 11:10:52 11:10:52.607 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.607 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 11:10:52 11:10:52.607 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.607 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 11:10:52 11:10:52.607 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
11:10:52 11:10:52.607 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 11:10:52 11:10:52.607 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.607 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 11:10:52 11:10:52.608 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 11:10:52 11:10:52.608 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 11:10:52 11:10:52.608 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.608 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 11:10:52 11:10:52.608 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=N6JdGKSnS5uP5QsQcans3w, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=39115, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 11:10:52 11:10:52.608 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.608 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 11:10:52 11:10:52.608 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.608 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 11:10:52 11:10:52.609 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 9 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.609 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 11:10:52 11:10:52.609 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.609 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 11:10:52 11:10:52.609 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
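The records above show controller 1 pushing UpdateMetadata for all 50 __consumer_offsets partitions, each with a single replica and its leader on broker 1 (the SASL_PLAINTEXT listener at localhost:39115), while the GroupMetadataManager starts loading offsets for each partition it now leads. A minimal sketch of how that layout could be confirmed from a client, assuming kafka-clients is on the classpath and that the test broker accepts PLAIN credentials for the "admin" principal seen later in this log (the password here is an assumption):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

// Hypothetical check against the test broker from the log above:
// __consumer_offsets should report 50 partitions, each led by broker 1.
public class OffsetsTopicCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // Credentials are assumptions; only the principal name "admin" appears in the log.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            System.out.println("partitions=" + desc.partitions().size()); // expected: 50
        }
    }
}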
11:10:52 11:10:52.609 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 11:10:52 11:10:52.609 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"N6JdGKSnS5uP5QsQcans3w","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderR
ecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partit
ionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":39115}]},"response":{"errorCode":0,"topics":[{"topicId":"N6JdGKSnS5uP5QsQcans3w","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"p
artitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43322-0","totalTimeMs":546.811,"requestQueueTimeMs":0.777,"localTimeMs":545.376,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.119,"sendTimeMs":0.537,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:52 11:10:52.609 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 11:10:52 11:10:52.610 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 11:10:52 11:10:52.610 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.610 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 11:10:52 11:10:52.610 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
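The LEADER_AND_ISR entry just above is one of the kafka.request.logger "Completed request" records this build emits at DEBUG level; its trailing fields break the 546.811 ms total down into queue, local, remote, response-queue and send time. A small, illustrative sketch for pulling that breakdown out of such lines when scanning a log like this one (regex-based, no JSON parser assumed):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extract the timing fields from a kafka.request.logger line.
public class RequestTiming {
    private static final Pattern FIELD = Pattern.compile(
            "\"(totalTimeMs|requestQueueTimeMs|localTimeMs|remoteTimeMs|responseQueueTimeMs|sendTimeMs)\":([0-9.]+)");

    public static void main(String[] args) {
        // Trimmed sample taken from the LEADER_AND_ISR record above.
        String line = "\"totalTimeMs\":546.811,\"requestQueueTimeMs\":0.777,"
                + "\"localTimeMs\":545.376,\"remoteTimeMs\":0.0,"
                + "\"throttleTimeMs\":0,\"responseQueueTimeMs\":0.119,\"sendTimeMs\":0.537";
        Matcher m = FIELD.matcher(line);
        while (m.find()) {
            System.out.println(m.group(1) + " = " + m.group(2) + " ms");
        }
    }
}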
11:10:52 11:10:52.610 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 11:10:52 11:10:52.611 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.611 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 11:10:52 11:10:52.611 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.611 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 11:10:52 11:10:52.612 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 11:10:52 11:10:52.612 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.612 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 11:10:52 11:10:52.612 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
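Each "Started/Finished loading offsets and group metadata from __consumer_offsets-N" pair above covers one partition of the offsets topic; a consumer group is served by whichever broker leads the partition its group id hashes to, which is why broker 1 (leader of all 50) later becomes the coordinator for mso-group. A sketch of that mapping, assuming the standard hash-modulo scheme used by the broker's GroupMetadataManager:

// Maps a group id to its __consumer_offsets partition: non-negative hashCode
// modulo offsets.topic.num.partitions (50 in this test, per the records above).
public class GroupOffsetsPartition {
    static int partitionFor(String groupId, int offsetsTopicPartitions) {
        return (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
    }

    public static void main(String[] args) {
        System.out.println("mso-group -> __consumer_offsets-"
                + partitionFor("mso-group", 50));
    }
}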
11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 11:10:52 11:10:52.613 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 11:10:52 11:10:52.613 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 11:10:52 11:10:52.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"N6JdGKSnS5uP5QsQcans3w","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex
":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"
replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":39115,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43322-0","totalTimeMs":2.531,"requestQueueTimeMs":0.805,"localTimeMs":1.497,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.07,"sendTimeMs":0.158,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:52 11:10:52.638 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.638 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:10:52 11:10:52.641 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, 
partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:10:52 11:10:52.641 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":2.128,"requestQueueTimeMs":0.225,"localTimeMs":1.449,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.339,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.641 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 11:10:52 11:10:52.642 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:10:52 11:10:52.642 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FindCoordinator request to broker localhost:39115 (id: 1 rack: null) 11:10:52 11:10:52.642 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 11:10:52 11:10:52.646 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, 
coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=39115, errorCode=0, errorMessage='')]) 11:10:52 11:10:52.647 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1768216252646, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=39115, errorCode=0, errorMessage='')])) 11:10:52 11:10:52.647 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Discovered group coordinator localhost:39115 (id: 2147483646 rack: null) 11:10:52 11:10:52.647 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":39115,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":3.568,"requestQueueTimeMs":0.127,"localTimeMs":3.137,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.224,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.647 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:10:52 11:10:52.647 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 2147483646 rack: null) using address localhost/127.0.0.1 11:10:52 11:10:52.647 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:43336 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:10:52 11:10:52.648 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:10:52 11:10:52.648 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:10:52 11:10:52.648 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:43336 11:10:52 11:10:52.651 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 11:10:52 11:10:52.651 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Marking assigned partitions pending for revocation: [] 11:10:52 11:10:52.651 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Heartbeat thread started 11:10:52 11:10:52.654 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 11:10:52 11:10:52.656 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 11:10:52 11:10:52.656 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:10:52 11:10:52.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 11:10:52 11:10:52.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:10:52 11:10:52.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:10:52 11:10:52.656 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] (Re-)joining group 11:10:52 11:10:52.657 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:10:52 11:10:52.657 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Joining group with current subscription: [my-test-topic] 11:10:52 11:10:52.663 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', 
metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:39115 (id: 2147483646 rack: null) 11:10:52 11:10:52.665 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:10:52 11:10:52.665 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:10:52 11:10:52.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:10:52 11:10:52.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:10:52 11:10:52.666 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:10:52 11:10:52.667 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INITIAL 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to INTERMEDIATE 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 11:10:52 11:10:52.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:10:52 11:10:52.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:10:52 11:10:52.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to COMPLETE 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 11:10:52 11:10:52.668 [main] DEBUG 
org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating API versions fetch from node 2147483646. 11:10:52 11:10:52.668 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=21) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:10:52 11:10:52.671 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=21): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, 
maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:10:52 11:10:52.672 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":21,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"min
Version":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":1.832,"requestQueueTimeMs":0.291,"localTimeMs":1.152,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.093,"sendTimeMs":0.295,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:10:52 11:10:52.672 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
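The consumer-side sequence logged above and below (metadata for my-test-topic, FindCoordinator for mso-group, the SASL PLAIN handshake and ApiVersions exchange, then JoinGroup answered with MEMBER_ID_REQUIRED and retried with the assigned member id) is the normal first-poll flow of a Java consumer configured like the one in this test. A hedged sketch of such a consumer, with localhost:39115, mso-group, the mso-123456-consumer client id prefix and my-test-topic taken from the log, and the PLAIN credentials assumed:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Sketch of a consumer whose first poll would drive the flow seen in this log:
// coordinator discovery, SASL PLAIN authentication, ApiVersions, JoinGroup retry.
public class MsoGroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        // Credentials are assumptions; the log only shows the principal User:admin.
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            consumer.poll(Duration.ofSeconds(5)); // triggers the sequence logged here
        }
    }
}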
11:10:52 11:10:52.672 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=20) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 11:10:52 11:10:52.688 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 and request the member to rejoin with this id. 11:10:52 11:10:52.694 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=20): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', members=[]) 11:10:52 11:10:52.694 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 11:10:52 11:10:52.694 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 11:10:52 11:10:52.694 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 11:10:52 11:10:52.694 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] (Re-)joining group 11:10:52 11:10:52.694 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Joining group with current subscription: [my-test-topic] 11:10:52 11:10:52.695 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:39115 (id: 2147483646 rack: null) 11:10:52 11:10:52.695 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":20,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","members":[]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":21.106,"requestQueueTimeMs":3.634,"localTimeMs":17.196,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.073,"sendTimeMs":0.201,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:52 11:10:52.695 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=22) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, 
-1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 11:10:52 11:10:52.698 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 joins group mso-group in Empty state. Adding to the group now. 11:10:52 11:10:52.707 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:10:52 11:10:52.717 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 11:10:55 11:10:55.727 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 11:10:55 11:10:55.730 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":22,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","skipAssignment":false,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","members":[{"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":3034.682,"requestQueueTimeMs":0.199,"localTimeMs":24.381,"remoteTimeMs":3009.72,"throttleTimeMs":0,"responseQueueTimeMs":0.113,"sendTimeMs":0.267,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:55 11:10:55.730 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, 
apiVersion=9, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=22): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', skipAssignment=false, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 11:10:55 11:10:55.731 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:10:55 11:10:55.731 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', skipAssignment=false, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 11:10:55 11:10:55.731 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Enabling heartbeat thread 11:10:55 11:10:55.731 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', protocol='range'} 11:10:55 11:10:55.732 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 11:10:55 11:10:55.737 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6=Assignment(partitions=[my-test-topic-0])} 11:10:55 11:10:55.742 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:39115 (id: 
2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 11:10:55 11:10:55.744 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=23) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 11:10:55 11:10:55.754 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 11:10:55 11:10:55.754 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
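At this point the group handshake is effectively complete: the first JOIN_GROUP was rejected with MEMBER_ID_REQUIRED (errorCode=79), the consumer rejoined with the member id handed back by the coordinator, was elected leader of generation 1, computed a range assignment of my-test-topic-0 for itself, and returned it to the coordinator in SYNC_GROUP. On the application side the same assignment can be observed with a rebalance listener; the sketch below is illustrative only and assumes a consumer configured as in the earlier snippet:

    import java.util.Collection;
    import java.util.List;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.common.TopicPartition;

    public final class AssignmentLoggingSubscriber {
        private AssignmentLoggingSubscriber() { }

        /** Subscribes to my-test-topic and prints what the coordinator assigns. */
        public static void subscribe(Consumer<String, String> consumer) {
            consumer.subscribe(List.of("my-test-topic"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Empty on the first generation; populated on later rebalances.
                    System.out.println("Revoked: " + partitions);
                }
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Corresponds to the log line "Adding newly assigned partitions: my-test-topic-0".
                    System.out.println("Assigned: " + partitions);
                }
            });
        }
    }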
11:10:55 11:10:55.800 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1768216252362, current time: 1768216255800,unflushed: 1 11:10:55 11:10:55.812 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 11:10:55 11:10:55.814 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 37 ms 11:10:55 11:10:55.827 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:10:55 11:10:55.828 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":23,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":80.75,"requestQueueTimeMs":3.198,"localTimeMs":76.339,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.502,"sendTimeMs":0.71,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:55 11:10:55.829 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=23): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 11:10:55 11:10:55.829 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 11:10:55 11:10:55.830 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Successfully synced group 
in generation Generation{generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', protocol='range'} 11:10:55 11:10:55.830 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 11:10:55 11:10:55.831 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 11:10:55 11:10:55.837 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 11:10:55 11:10:55.842 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 11:10:55 11:10:55.843 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=24) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 11:10:55 11:10:55.856 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":24,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":11.338,"requestQueueTimeMs":2.412,"localTimeMs":8.652,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.18,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:55 11:10:55.856 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=24): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', 
partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 11:10:55 11:10:55.857 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Found no committed offset for partition my-test-topic-0 11:10:55 11:10:55.862 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:39115 (id: 1 rack: null) 11:10:55 11:10:55.863 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=25) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 11:10:55 11:10:55.880 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":25,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":15.497,"requestQueueTimeMs":2.632,"localTimeMs":12.627,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.164,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:55 11:10:55.880 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=25): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 11:10:55 11:10:55.882 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. 
Fetched offset 0, timestamp -1 11:10:55 11:10:55.883 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 11:10:55 11:10:55.884 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}}. 11:10:55 11:10:55.890 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:55 11:10:55.890 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 11:10:55 11:10:55.891 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:55 11:10:55.892 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=26) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 11:10:55 11:10:55.901 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 11:10:55 11:10:55.991 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 11:10:56 11:10:56.001 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 11:10:56 11:10:56.002 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 
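OFFSET_FETCH returned committedOffset=-1 (no prior commit for mso-group), so the consumer fell back to its offset-reset policy and used LIST_OFFSETS with timestamp=-1 (the latest offset) to position itself at offset 0 of the still-empty partition. The FETCH requests that follow each block for roughly 500 ms (the default fetch.max.wait.ms) and come back with no records. A plain poll-and-commit loop that produces this request pattern, again assuming the consumer configured earlier, might look like:

    import java.time.Duration;

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    public final class PollLoop {
        private PollLoop() { }

        /** Polls my-test-topic; each empty poll maps to one ~500 ms FETCH round trip in the log. */
        public static void run(Consumer<String, String> consumer) {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("my-test-topic-%d offset %d: %s%n",
                            record.partition(), record.offset(), record.value());
                }
                if (!records.isEmpty()) {
                    // Committing is what would make a later OFFSET_FETCH return a real offset instead of -1.
                    consumer.commitSync();
                }
            }
        }
    }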
11:10:56 11:10:56.441 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1613820950, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1768216256436, lastUsedMs=1768216256436, epoch=1) 11:10:56 11:10:56.445 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1613820950 returning 1 partition(s) 11:10:56 11:10:56.454 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":26,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"QbCSIHehTz6oyvgrkYsJtw","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[{"topicId":"QbCSIHehTz6oyvgrkYsJtw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":560.345,"requestQueueTimeMs":2.462,"localTimeMs":32.138,"remoteTimeMs":525.063,"throttleTimeMs":0,"responseQueueTimeMs":0.182,"sendTimeMs":0.498,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:56 11:10:56.456 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=26): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[FetchableTopicResponse(topic='', topicId=QbCSIHehTz6oyvgrkYsJtw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 11:10:56 11:10:56.458 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1613820950 with 1 response partition(s) 11:10:56 11:10:56.460 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, 
preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 11:10:56 11:10:56.464 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:56 11:10:56.464 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=1) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:56 11:10:56.464 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:56 11:10:56.464 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=27) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 11:10:56 11:10:56.468 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:56 11:10:56.977 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:56 11:10:56.979 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=27): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:56 11:10:56.980 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:56 11:10:56.980 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":27,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":513.515,"requestQueueTimeMs":0.239,"localTimeMs":7.34,"remoteTimeMs":504.94,"throttleTimeMs":0,"responseQueueTimeMs":0.297,"sendTimeMs":0.697,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:56 11:10:56.981 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:56 11:10:56.981 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:56 11:10:56.981 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:56 11:10:56.982 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=28) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 11:10:56 11:10:56.984 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:57 11:10:57.486 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:57 11:10:57.488 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=28): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:57 11:10:57.489 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:57 11:10:57.489 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":28,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.025,"requestQueueTimeMs":0.35,"localTimeMs":1.314,"remoteTimeMs":502.236,"throttleTimeMs":0,"responseQueueTimeMs":0.368,"sendTimeMs":0.756,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:57 11:10:57.490 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:57 11:10:57.490 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=3) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:57 11:10:57.490 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:57 11:10:57.490 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=29) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 11:10:57 11:10:57.493 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:58 11:10:57.996 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:58 11:10:57.998 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=29): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:58 11:10:57.998 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:58 11:10:57.999 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:58 11:10:57.999 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":29,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":506.147,"requestQueueTimeMs":0.349,"localTimeMs":2.108,"remoteTimeMs":502.736,"throttleTimeMs":0,"responseQueueTimeMs":0.231,"sendTimeMs":0.722,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:58 11:10:57.999 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:58 11:10:57.999 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:58 11:10:58.000 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=30) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 11:10:58 11:10:58.002 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:58 11:10:58.505 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:58 11:10:58.507 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=30): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:58 11:10:58.507 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:58 11:10:58.508 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 
at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:58 11:10:58.508 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":30,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.446,"requestQueueTimeMs":0.273,"localTimeMs":2.057,"remoteTimeMs":502.057,"throttleTimeMs":0,"responseQueueTimeMs":0.408,"sendTimeMs":0.649,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:58 11:10:58.509 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=5) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:58 11:10:58.509 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:58 11:10:58.510 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=31) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 11:10:58 11:10:58.512 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:58 11:10:58.732 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 to coordinator localhost:39115 (id: 2147483646 rack: null) 11:10:58 11:10:58.735 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=32) and 
timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null) 11:10:58 11:10:58.743 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:10:58 11:10:58.746 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=32): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 11:10:58 11:10:58.746 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful Heartbeat response 11:10:58 11:10:58.747 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":32,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":9.043,"requestQueueTimeMs":2.797,"localTimeMs":5.809,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.097,"sendTimeMs":0.338,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:59 11:10:59.015 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:59 11:10:59.017 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=31): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:59 11:10:59.018 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:59 11:10:59.018 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":31,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.687,"requestQueueTimeMs":0.272,"localTimeMs":1.857,"remoteTimeMs":502.68,"throttleTimeMs":0,"responseQueueTimeMs":0.306,"sendTimeMs":0.57,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:59 11:10:59.018 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:59 11:10:59.019 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=6) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:59 11:10:59.019 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:59 11:10:59.019 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 11:10:59 11:10:59.021 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:10:59 11:10:59.524 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:10:59 11:10:59.526 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:10:59 11:10:59.526 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:10:59 11:10:59.526 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.471,"requestQueueTimeMs":0.321,"localTimeMs":2.214,"remoteTimeMs":502.027,"throttleTimeMs":0,"responseQueueTimeMs":0.268,"sendTimeMs":0.64,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:10:59 11:10:59.527 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:10:59 11:10:59.527 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=7) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:10:59 11:10:59.527 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:10:59 11:10:59.527 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 11:10:59 11:10:59.529 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:00 11:11:00.033 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:00 11:11:00.035 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:00 11:11:00.036 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:00 11:11:00.036 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.947,"requestQueueTimeMs":0.276,"localTimeMs":1.777,"remoteTimeMs":502.905,"throttleTimeMs":0,"responseQueueTimeMs":0.274,"sendTimeMs":0.713,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:00 11:11:00.037 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:00 11:11:00.038 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=8) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:00 11:11:00.038 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:00 11:11:00.039 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 11:11:00 11:11:00.041 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:00 11:11:00.544 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:00 11:11:00.546 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:00 11:11:00.547 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:00 11:11:00.547 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.585,"requestQueueTimeMs":0.343,"localTimeMs":2.099,"remoteTimeMs":502.216,"throttleTimeMs":0,"responseQueueTimeMs":0.24,"sendTimeMs":0.684,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:00 11:11:00.547 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:00 11:11:00.548 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=9) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:00 11:11:00.548 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:00 11:11:00.548 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=36) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 11:11:00 11:11:00.550 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:00 11:11:00.834 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:00 11:11:00.836 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=37) and timeout 30000 to node 2147483646: 
OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 11:11:00 11:11:00.851 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:00 11:11:00.861 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1768216255811, current time: 1768216260861,unflushed: 1 11:11:00 11:11:00.866 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 11:11:00 11:11:00.866 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 6 ms 11:11:00 11:11:00.877 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=37): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 11:11:00 11:11:00.878 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":37,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":39.621,"requestQueueTimeMs":6.299,"localTimeMs":32.355,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.308,"sendTimeMs":0.657,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:00 11:11:00.878 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 11:11:00 11:11:00.878 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed asynchronous 
auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:01 11:11:01.053 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:01 11:11:01.055 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=36): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:01 11:11:01.055 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:01 11:11:01.056 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:01 11:11:01.056 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=10) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:01 11:11:01.056 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":36,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.734,"requestQueueTimeMs":0.288,"localTimeMs":1.98,"remoteTimeMs":502.497,"throttleTimeMs":0,"responseQueueTimeMs":0.315,"sendTimeMs":0.652,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:01 11:11:01.056 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:01 11:11:01.057 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=38) and timeout 30000 
to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 11:11:01 11:11:01.059 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:01 11:11:01.561 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:01 11:11:01.563 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=38): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:01 11:11:01.563 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:01 11:11:01.564 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":38,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.669,"requestQueueTimeMs":0.367,"localTimeMs":1.703,"remoteTimeMs":501.797,"throttleTimeMs":0,"responseQueueTimeMs":0.201,"sendTimeMs":0.599,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:01 11:11:01.564 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:01 11:11:01.564 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=11) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:01 11:11:01.564 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:01 11:11:01.564 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 11:11:01 11:11:01.566 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:01 11:11:01.733 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 to coordinator localhost:39115 (id: 2147483646 rack: null) 11:11:01 11:11:01.734 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=40) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null) 11:11:01 11:11:01.736 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:01 11:11:01.738 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=40): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 11:11:01 11:11:01.738 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful Heartbeat response 11:11:01 11:11:01.739 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":40,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":3.052,"requestQueueTimeMs":0.352,"localTimeMs":2.222,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.122,"sendTimeMs":0.354,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:02 11:11:02.070 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:02 11:11:02.071 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:02 11:11:02.072 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:02 11:11:02.072 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.682,"requestQueueTimeMs":0.462,"localTimeMs":2.091,"remoteTimeMs":502.302,"throttleTimeMs":0,"responseQueueTimeMs":0.328,"sendTimeMs":0.497,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:02 11:11:02.073 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:02 11:11:02.073 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=12) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:02 11:11:02.073 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:02 11:11:02.073 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 11:11:02 11:11:02.075 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:02 11:11:02.577 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:02 11:11:02.578 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:02 11:11:02.579 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:02 11:11:02.579 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.279,"requestQueueTimeMs":0.267,"localTimeMs":1.756,"remoteTimeMs":501.378,"throttleTimeMs":0,"responseQueueTimeMs":0.275,"sendTimeMs":0.6,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:02 11:11:02.580 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:02 11:11:02.580 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=13) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:02 11:11:02.580 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:02 11:11:02.580 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 11:11:02 11:11:02.582 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:02 11:11:02.590 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:11:02 11:11:02.590 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:11:02 11:11:02.590 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:11:02 11:11:02.591 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x10000020e9e0000 after 1ms. 
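[editor's note] The preceding stretch of DEBUG output shows the consumer's steady idle state: incremental FETCH requests against my-test-topic-0 that return 0 partitions roughly every 500 ms (fetch.max.wait.ms), HEARTBEAT requests to the group coordinator about every 3 s, and an asynchronous auto-commit of offset 0 for group mso-group over SASL_PLAINTEXT. The sketch below is a minimal, illustrative Java consumer that would produce this kind of traffic; the bootstrap address, SASL credentials, and interval values are assumptions for illustration only, not the actual test configuration.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class IdleConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address is a placeholder; the log happens to show localhost:39115 for this run.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Auto-commit produces the "Sending asynchronous auto-commit of offsets" entries.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "5000");
            // maxWaitMs=500 in the FETCH requests corresponds to fetch.max.wait.ms.
            props.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, "500");
            props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");
            // SASL_PLAINTEXT settings; the JAAS credentials here are assumed placeholders.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                while (true) {
                    // Each poll() drives the fetch, heartbeat, and auto-commit machinery seen above.
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }

With no messages on the topic, a loop like this yields exactly the pattern logged here: empty incremental fetch responses with an advancing session epoch, periodic successful heartbeats, and repeated commits of offset 0.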
11:11:03 11:11:03.085 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:03 11:11:03.087 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:03 11:11:03.087 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:03 11:11:03.087 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.489,"requestQueueTimeMs":0.4,"localTimeMs":2.053,"remoteTimeMs":502.168,"throttleTimeMs":0,"responseQueueTimeMs":0.236,"sendTimeMs":0.631,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:03 11:11:03.088 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:03 11:11:03.088 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=14) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:03 11:11:03.088 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:03 11:11:03.089 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=43) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 11:11:03 11:11:03.090 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:03 11:11:03.593 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:03 11:11:03.594 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=43): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:03 11:11:03.595 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:03 11:11:03.595 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":43,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.508,"requestQueueTimeMs":0.315,"localTimeMs":1.921,"remoteTimeMs":501.593,"throttleTimeMs":0,"responseQueueTimeMs":0.166,"sendTimeMs":0.511,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:03 11:11:03.595 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:03 11:11:03.595 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:03 11:11:03.596 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:03 11:11:03.596 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=44) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 11:11:03 11:11:03.598 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:04 11:11:04.100 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:04 11:11:04.101 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=44): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:04 11:11:04.102 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:04 11:11:04.102 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":44,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.265,"requestQueueTimeMs":0.303,"localTimeMs":1.521,"remoteTimeMs":501.811,"throttleTimeMs":0,"responseQueueTimeMs":0.195,"sendTimeMs":0.434,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:04 11:11:04.102 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:04 11:11:04.102 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:04 11:11:04.103 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:04 11:11:04.103 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 11:11:04 11:11:04.104 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:04 11:11:04.607 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:04 11:11:04.608 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:04 11:11:04.609 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:04 11:11:04.609 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.591,"requestQueueTimeMs":0.217,"localTimeMs":1.83,"remoteTimeMs":501.975,"throttleTimeMs":0,"responseQueueTimeMs":0.133,"sendTimeMs":0.435,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:04 11:11:04.610 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:04 11:11:04.610 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=17) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:04 11:11:04.610 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:04 11:11:04.610 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=46) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 11:11:04 11:11:04.611 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:04 11:11:04.734 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 to coordinator localhost:39115 (id: 2147483646 rack: null) 11:11:04 11:11:04.734 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=47) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null) 11:11:04 11:11:04.736 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:04 11:11:04.737 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=47): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 11:11:04 11:11:04.737 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful Heartbeat response 11:11:04 11:11:04.738 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":47,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":1.71,"requestQueueTimeMs":0.261,"localTimeMs":1.113,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.245,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:04 11:11:04.802 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.808 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.808 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.809 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. 
Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.810 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.811 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. 
Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.812 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. 
Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.813 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.814 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.815 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. 
Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:04 11:11:04.815 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1768216264796 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 11:11:05 11:11:05.114 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:05 11:11:05.115 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=46): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:05 11:11:05.115 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:05 11:11:05.116 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":46,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.915,"requestQueueTimeMs":0.206,"localTimeMs":1.551,"remoteTimeMs":501.53,"throttleTimeMs":0,"responseQueueTimeMs":0.15,"sendTimeMs":0.476,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:05 11:11:05.116 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:05 11:11:05.116 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=18) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:05 11:11:05.117 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:05 11:11:05.117 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 11:11:05 11:11:05.119 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:05 11:11:05.620 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:05 11:11:05.621 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:05 11:11:05.622 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:05 11:11:05.622 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.737,"requestQueueTimeMs":0.286,"localTimeMs":2.112,"remoteTimeMs":500.853,"throttleTimeMs":0,"responseQueueTimeMs":0.18,"sendTimeMs":0.303,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:05 11:11:05.623 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:05 11:11:05.623 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=19) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:05 11:11:05.623 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:05 11:11:05.624 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 11:11:05 11:11:05.625 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:05 11:11:05.834 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:05 11:11:05.834 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=50) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 11:11:05 11:11:05.836 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:05 11:11:05.838 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1768216260866, current time: 1768216265838,unflushed: 1 11:11:05 11:11:05.843 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 
segment=[0:706]) 11:11:05 11:11:05.843 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 6 ms 11:11:05 11:11:05.844 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=50): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 11:11:05 11:11:05.844 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 11:11:05 11:11:05.844 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:05 11:11:05.845 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":50,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":9.318,"requestQueueTimeMs":0.476,"localTimeMs":8.298,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.108,"sendTimeMs":0.435,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:06 11:11:06.127 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:06 11:11:06.128 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:06 11:11:06.128 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:06 11:11:06.129 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.824,"requestQueueTimeMs":0.193,"localTimeMs":1.28,"remoteTimeMs":501.72,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.469,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:06 11:11:06.130 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:06 11:11:06.130 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=20) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:06 11:11:06.130 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:06 11:11:06.131 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 11:11:06 11:11:06.133 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:06 11:11:06.634 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:06 11:11:06.636 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:06 11:11:06.637 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:06 11:11:06.637 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.842,"requestQueueTimeMs":0.299,"localTimeMs":1.149,"remoteTimeMs":501.38,"throttleTimeMs":0,"responseQueueTimeMs":0.268,"sendTimeMs":0.745,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:06 11:11:06.638 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:06 11:11:06.639 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=21) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:06 11:11:06.639 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:06 11:11:06.639 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=52) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 11:11:06 11:11:06.641 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:07 11:11:07.143 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:07 11:11:07.145 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=52): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:07 11:11:07.145 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:07 11:11:07.146 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":52,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.588,"requestQueueTimeMs":0.281,"localTimeMs":1.119,"remoteTimeMs":502.321,"throttleTimeMs":0,"responseQueueTimeMs":0.261,"sendTimeMs":0.603,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:07 11:11:07.146 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:07 11:11:07.146 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:07 11:11:07.147 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:07 11:11:07.147 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 11:11:07 11:11:07.149 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:07 11:11:07.652 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:07 11:11:07.654 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":505.256,"requestQueueTimeMs":0.371,"localTimeMs":2.104,"remoteTimeMs":502.177,"throttleTimeMs":0,"responseQueueTimeMs":0.121,"sendTimeMs":0.481,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:07 11:11:07.654 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:07 11:11:07.654 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response 
partition(s), 1 implied partition(s) 11:11:07 11:11:07.655 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:07 11:11:07.655 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=23) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:07 11:11:07.655 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:07 11:11:07.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 11:11:07 11:11:07.656 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:07 11:11:07.735 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 to coordinator localhost:39115 (id: 2147483646 rack: null) 11:11:07 11:11:07.736 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=55) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null) 11:11:07 11:11:07.738 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:07 11:11:07.740 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received HEARTBEAT response from node 
2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=55): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 11:11:07 11:11:07.740 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful Heartbeat response 11:11:07 11:11:07.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":55,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":1.986,"requestQueueTimeMs":0.287,"localTimeMs":0.906,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.497,"sendTimeMs":0.295,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:08 11:11:08.159 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:08 11:11:08.160 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:08 11:11:08.160 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.509,"requestQueueTimeMs":0.189,"localTimeMs":1.087,"remoteTimeMs":501.796,"throttleTimeMs":0,"responseQueueTimeMs":0.13,"sendTimeMs":0.305,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:08 11:11:08.161 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:08 11:11:08.161 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request 
for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:08 11:11:08.162 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=24) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:08 11:11:08.162 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:08 11:11:08.162 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=56) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 11:11:08 11:11:08.163 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:08 11:11:08.666 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:08 11:11:08.668 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=56): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:08 11:11:08.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":56,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.356,"requestQueueTimeMs":0.236,"localTimeMs":1.021,"remoteTimeMs":502.651,"throttleTimeMs":0,"responseQueueTimeMs":0.156,"sendTimeMs":0.289,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:08 11:11:08.669 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an 
incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:08 11:11:08.669 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:08 11:11:08.670 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:08 11:11:08.670 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:08 11:11:08.671 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 11:11:08 11:11:08.672 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:09 11:11:09.174 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:09 11:11:09.175 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:09 11:11:09.176 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.665,"requestQueueTimeMs":0.233,"localTimeMs":1.192,"remoteTimeMs":501.82,"throttleTimeMs":0,"responseQueueTimeMs":0.135,"sendTimeMs":0.282,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:09 11:11:09.176 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:09 11:11:09.177 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:09 11:11:09.177 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=26) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:09 11:11:09.177 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:09 11:11:09.177 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 11:11:09 11:11:09.179 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:09 11:11:09.681 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:09 11:11:09.682 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:09 11:11:09.682 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.392,"requestQueueTimeMs":0.215,"localTimeMs":1.404,"remoteTimeMs":501.303,"throttleTimeMs":0,"responseQueueTimeMs":0.121,"sendTimeMs":0.348,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:09 11:11:09.683 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:09 11:11:09.684 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:09 11:11:09.684 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=27) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:09 11:11:09.684 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:09 11:11:09.684 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=59) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 11:11:09 11:11:09.686 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:10 11:11:10.188 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:10 11:11:10.189 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":59,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.466,"requestQueueTimeMs":0.247,"localTimeMs":1.197,"remoteTimeMs":501.594,"throttleTimeMs":0,"responseQueueTimeMs":0.136,"sendTimeMs":0.289,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:10 11:11:10.189 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=59): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:10 11:11:10.190 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response 
partition(s), 1 implied partition(s) 11:11:10 11:11:10.191 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:10 11:11:10.191 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=28) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:10 11:11:10.191 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:10 11:11:10.192 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 11:11:10 11:11:10.193 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:10 11:11:10.696 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:10 11:11:10.697 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:10 11:11:10.698 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.23,"requestQueueTimeMs":0.303,"localTimeMs":1.87,"remoteTimeMs":501.541,"throttleTimeMs":0,"responseQueueTimeMs":0.12,"sendTimeMs":0.394,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:10 11:11:10.698 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:10 11:11:10.699 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:10 11:11:10.699 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=29) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:10 11:11:10.699 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:10 11:11:10.699 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 11:11:10 11:11:10.701 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:10 11:11:10.736 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6 to coordinator localhost:39115 (id: 2147483646 rack: null) 11:11:10 11:11:10.737 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=62) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null) 11:11:10 11:11:10.738 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:10 11:11:10.739 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=62): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 11:11:10 11:11:10.739 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received successful Heartbeat response 11:11:10 11:11:10.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":62,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":1.737,"requestQueueTimeMs":0.232,"localTimeMs":1.157,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.083,"sendTimeMs":0.263,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:10 11:11:10.833 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:10 11:11:10.834 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=63) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 11:11:10 11:11:10.836 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6) unblocked 1 Heartbeat operations 11:11:10 11:11:10.838 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1768216265842, current time: 1768216270838,unflushed: 1 11:11:10 11:11:10.844 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 11:11:10 11:11:10.845 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 8 ms 11:11:10 11:11:10.847 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=63): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 11:11:10 11:11:10.847 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":63,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549-083c14ec-f89b-4a86-bb74-9b008de55af6","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43336-3","totalTimeMs":11.915,"requestQueueTimeMs":0.234,"localTimeMs":11.299,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.266,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:10 11:11:10.848 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 11:11:10 11:11:10.848 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 11:11:11 11:11:11.203 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:11 11:11:11.205 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:11 11:11:11.205 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":504.319,"requestQueueTimeMs":0.203,"localTimeMs":1.531,"remoteTimeMs":501.982,"throttleTimeMs":0,"responseQueueTimeMs":0.119,"sendTimeMs":0.481,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.206 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 
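
[Editor's note] The fetch / heartbeat / asynchronous auto-commit cycle logged above is what a plain KafkaConsumer produces while polling an empty my-test-topic with enable.auto.commit=true over SASL_PLAINTEXT. A minimal sketch of such a consumer, assuming the broker address and group id from this run; the JAAS credentials are placeholders (the real ones are hidden in the log):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PollLoopSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");   // broker from this run
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");              // drives the async OFFSET_COMMIT requests
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder credentials; the real sasl.jaas.config is [hidden] in the log.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // Each poll() drives incremental FETCH round trips like the ones logged above;
            // while the topic is empty the responses carry no record data.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```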
11:11:11 11:11:11.206 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:11 11:11:11.207 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=30) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:11 11:11:11.207 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:11 11:11:11.207 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 11:11:11 11:11:11.209 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:11 11:11:11.711 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 0 partition(s) 11:11:11 11:11:11.712 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[]) 11:11:11 11:11:11.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":503.729,"requestQueueTimeMs":0.266,"localTimeMs":1.958,"remoteTimeMs":500.995,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.348,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.713 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 0 response partition(s), 1 implied partition(s) 11:11:11 11:11:11.714 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:11 11:11:11.714 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=31) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:11 11:11:11.714 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:11 11:11:11.715 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=31, topics=[], forgottenTopicsData=[], rackId='') 11:11:11 11:11:11.716 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 32: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 11:11:11 11:11:11.793 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:11 acks = -1 11:11:11 batch.size = 16384 11:11:11 bootstrap.servers = [SASL_PLAINTEXT://localhost:39115] 11:11:11 buffer.memory = 33554432 11:11:11 client.dns.lookup = use_all_dns_ips 11:11:11 client.id = mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c 11:11:11 compression.type = none 11:11:11 connections.max.idle.ms = 540000 11:11:11 delivery.timeout.ms = 120000 11:11:11 enable.idempotence = true 11:11:11 interceptor.classes = [] 11:11:11 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:11 linger.ms = 0 11:11:11 max.block.ms = 60000 11:11:11 max.in.flight.requests.per.connection = 5 11:11:11 max.request.size = 1048576 11:11:11 metadata.max.age.ms = 300000 11:11:11 metadata.max.idle.ms = 300000 11:11:11 metric.reporters = [] 11:11:11 metrics.num.samples = 2 11:11:11 metrics.recording.level = INFO 11:11:11 metrics.sample.window.ms = 30000 11:11:11 partitioner.adaptive.partitioning.enable = true 11:11:11 partitioner.availability.timeout.ms = 0 11:11:11 partitioner.class = null 11:11:11 partitioner.ignore.keys = false 11:11:11 receive.buffer.bytes = 32768 11:11:11 reconnect.backoff.max.ms = 1000 11:11:11 reconnect.backoff.ms = 50 11:11:11 request.timeout.ms = 30000 11:11:11 retries = 2147483647 11:11:11 retry.backoff.ms = 100 11:11:11 sasl.client.callback.handler.class = null 11:11:11 sasl.jaas.config = [hidden] 11:11:11 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:11 sasl.kerberos.min.time.before.relogin = 60000 11:11:11 sasl.kerberos.service.name = null 11:11:11 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:11 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:11 sasl.login.callback.handler.class = null 11:11:11 sasl.login.class = null 11:11:11 sasl.login.connect.timeout.ms = null 11:11:11 sasl.login.read.timeout.ms = null 11:11:11 sasl.login.refresh.buffer.seconds = 300 11:11:11 sasl.login.refresh.min.period.seconds = 60 11:11:11 sasl.login.refresh.window.factor = 0.8 11:11:11 sasl.login.refresh.window.jitter = 0.05 11:11:11 sasl.login.retry.backoff.max.ms = 10000 11:11:11 sasl.login.retry.backoff.ms = 100 11:11:11 sasl.mechanism = PLAIN 11:11:11 
sasl.oauthbearer.clock.skew.seconds = 30 11:11:11 sasl.oauthbearer.expected.audience = null 11:11:11 sasl.oauthbearer.expected.issuer = null 11:11:11 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:11 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:11 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:11 sasl.oauthbearer.jwks.endpoint.url = null 11:11:11 sasl.oauthbearer.scope.claim.name = scope 11:11:11 sasl.oauthbearer.sub.claim.name = sub 11:11:11 sasl.oauthbearer.token.endpoint.url = null 11:11:11 security.protocol = SASL_PLAINTEXT 11:11:11 security.providers = null 11:11:11 send.buffer.bytes = 131072 11:11:11 socket.connection.setup.timeout.max.ms = 30000 11:11:11 socket.connection.setup.timeout.ms = 10000 11:11:11 ssl.cipher.suites = null 11:11:11 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:11 ssl.endpoint.identification.algorithm = https 11:11:11 ssl.engine.factory.class = null 11:11:11 ssl.key.password = null 11:11:11 ssl.keymanager.algorithm = SunX509 11:11:11 ssl.keystore.certificate.chain = null 11:11:11 ssl.keystore.key = null 11:11:11 ssl.keystore.location = null 11:11:11 ssl.keystore.password = null 11:11:11 ssl.keystore.type = JKS 11:11:11 ssl.protocol = TLSv1.3 11:11:11 ssl.provider = null 11:11:11 ssl.secure.random.implementation = null 11:11:11 ssl.trustmanager.algorithm = PKIX 11:11:11 ssl.truststore.certificates = null 11:11:11 ssl.truststore.location = null 11:11:11 ssl.truststore.password = null 11:11:11 ssl.truststore.type = JKS 11:11:11 transaction.timeout.ms = 60000 11:11:11 transactional.id = null 11:11:11 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:11 11:11:11 11:11:11.806 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Instantiated an idempotent producer. 11:11:11 11:11:11.826 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:11 11:11:11.826 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:11 11:11:11.826 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Starting Kafka producer I/O thread. 
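
[Editor's note] The ProducerConfig dump above describes an idempotent String/String producer on the same SASL_PLAINTEXT listener (acks=-1, enable.idempotence=true). A rough client-side equivalent, again with placeholder credentials since the real sasl.jaas.config is [hidden]:

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");  // SASL_PLAINTEXT listener from this run
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");      // client id prefix seen in the log
        props.put(ProducerConfig.ACKS_CONFIG, "all");                           // acks = -1 in the dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // Placeholder JAAS credentials.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The idempotent producer obtains a producer id from the broker (the INIT_PRODUCER_ID
            // request logged below) and fetches metadata for my-test-topic before writing.
            RecordMetadata md = producer
                    .send(new ProducerRecord<>("my-test-topic", "key", "hello"))
                    .get();
            System.out.printf("wrote to %s-%d at offset %d%n", md.topic(), md.partition(), md.offset());
        }
    }
}
```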
11:11:11 11:11:11.826 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216271826 11:11:11 11:11:11.826 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Kafka producer started 11:11:11 11:11:11.827 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Transition from state UNINITIALIZED to INITIALIZING 11:11:11 11:11:11.830 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:11 11:11:11.830 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: -1 rack: null) for sending metadata request 11:11:11 11:11:11.830 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:11 11:11:11.830 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:11 11:11:11.831 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:11 11:11:11.831 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:11 11:11:11.831 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37744 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:11:11 11:11:11.831 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37744 11:11:11 11:11:11.836 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 11:11:11 11:11:11.837 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:11:11 11:11:11.837 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:11:11 11:11:11.838 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:11:11 11:11:11.838 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:11:11 11:11:11.838 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Completed connection to node -1. Fetching API versions. 11:11:11 11:11:11.841 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:11:11 11:11:11.841 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:11:11 11:11:11.841 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:11:11 11:11:11.841 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:11:11 11:11:11.842 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:11:11 11:11:11.842 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to INITIAL 11:11:11 11:11:11.843 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to INTERMEDIATE 11:11:11 11:11:11.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:11:11 
11:11:11.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:11:11 11:11:11.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:11:11 11:11:11.843 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to COMPLETE 11:11:11 11:11:11.843 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Finished authentication with no session expiration and no session re-authentication 11:11:11 11:11:11.843 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Successfully authenticated with localhost/127.0.0.1 11:11:11 11:11:11.844 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating API versions fetch from node -1. 11:11:11 11:11:11.844 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:11:11 11:11:11.847 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, 
minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:11:11 11:11:11.847 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:37744-4","totalTimeMs":1.867,"requestQueueTimeMs":0.679,"localTimeMs":0.843,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.252,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:11:11 11:11:11.847 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
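
[Editor's note] The "[usable: N]" values in the NetworkClient entry above come from the version negotiation: for each API the client picks the highest version supported by both itself and the broker. This is an illustrative sketch of that rule, not Kafka's internal code:

```java
public class UsableVersionSketch {

    /** Highest API version both client and broker support; fails if the ranges do not overlap. */
    static int usableVersion(short clientMin, short clientMax, short brokerMin, short brokerMax) {
        int min = Math.max(clientMin, brokerMin);
        int max = Math.min(clientMax, brokerMax);
        if (min > max) {
            throw new IllegalStateException("no common version: ranges do not overlap");
        }
        return max;
    }

    public static void main(String[] args) {
        // Fetch(1): client 3.3.1 supports 0..13, broker advertises 0..13 -> usable 13
        System.out.println("Fetch usable = " + usableVersion((short) 0, (short) 13, (short) 0, (short) 13));
        // ApiVersions(18): both sides support 0..3 -> usable 3
        System.out.println("ApiVersions usable = " + usableVersion((short) 0, (short) 3, (short) 0, (short) 3));
    }
}
```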
11:11:11 11:11:11.848 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:39115 (id: -1 rack: null) 11:11:11 11:11:11.848 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 11:11:11 11:11:11.848 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:39115 (id: -1 rack: null) with correlation ID 2 11:11:11 11:11:11.849 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:11 11:11:11.851 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=39115, rack=null)], clusterId='jx5ycp9PTHOXo1U6H8QTmw', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 11:11:11 11:11:11.851 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":39115,"rack":null}],"clusterId":"jx5ycp9PTHOXo1U6H8QTmw","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"QbCSIHehTz6oyvgrkYsJtw","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:39115-127.0.0.1:37744-4","totalTimeMs":1.838,"requestQueueTimeMs":0.195,"localTimeMs":1.356,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.193,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.851 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to QbCSIHehTz6oyvgrkYsJtw 11:11:11 11:11:11.851 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Cluster ID: jx5ycp9PTHOXo1U6H8QTmw 11:11:11 11:11:11.852 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='jx5ycp9PTHOXo1U6H8QTmw', nodes={1=localhost:39115 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:39115 (id: 1 rack: null)} 11:11:11 11:11:11.856 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 11:11:11 11:11:11.860 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:11 11:11:11.860 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:11 11:11:11.861 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:11 11:11:11.861 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] 
Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:11 11:11:11.861 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37746 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:11:11 11:11:11.861 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37746 11:11:11 11:11:11.864 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 11:11:11 11:11:11.865 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:11:11 11:11:11.865 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 11:11:11 11:11:11.865 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:11:11 11:11:11.865 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:11:11 11:11:11.866 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:11:11 11:11:11.866 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:11:11 11:11:11.866 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:11:11 11:11:11.866 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:11:11 11:11:11.866 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:11:11 11:11:11.866 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 11:11:11 11:11:11.867 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 11:11:11 11:11:11.867 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:11:11 11:11:11.867 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:11:11 11:11:11.867 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:11:11 11:11:11.867 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:11:11 11:11:11.867 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 11:11:11 11:11:11.867 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 11:11:11 11:11:11.867 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 11:11:11 11:11:11.867 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 
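
[Editor's note] The SASL state transitions above (client SEND_APIVERSIONS_REQUEST → … → COMPLETE, server HANDSHAKE_OR_VERSIONS_REQUEST → … → COMPLETE) are what a SASL_PLAINTEXT listener with the PLAIN mechanism produces, for external clients and for the broker's own forwarding channel alike. A hedged sketch of the broker-side properties that enable such a listener; the credentials are placeholders, since the test harness's real values do not appear in the log:

```java
import java.util.Properties;

public class SaslPlainBrokerConfigSketch {

    /** Broker properties for a SASL_PLAINTEXT listener with the PLAIN mechanism (placeholder credentials). */
    static Properties saslPlainBrokerProps(int port) {
        Properties props = new Properties();
        props.put("listeners", "SASL_PLAINTEXT://localhost:" + port);
        props.put("security.inter.broker.protocol", "SASL_PLAINTEXT");   // also used by the broker-to-controller channel
        props.put("sasl.enabled.mechanisms", "PLAIN");
        props.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        // Listener-scoped JAAS config; user_<name> entries define the accounts clients can log in with.
        props.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" "
                + "user_admin=\"admin-secret\";");
        return props;
    }

    public static void main(String[] args) {
        saslPlainBrokerProps(39115).forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```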
11:11:11 11:11:11.867 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:11:11 11:11:11.870 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, 
minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:11:11 11:11:11.871 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
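Editor's note: the API_VERSIONS exchange traced above is performed automatically by every Kafka Java client before it issues any other request; the broker advertises a min/max version per API and the client picks the highest mutually supported one. Below is a minimal sketch, not taken from this project's code, of triggering the same negotiation from a test client against the SASL_PLAINTEXT listener at localhost:39115 seen in this log; the admin username matches the logged principal, while the password is a placeholder not visible here.

    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.DescribeClusterResult;

    public class ApiVersionsProbe {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Listener and security settings taken from the log; the secret is a placeholder.
            props.put("bootstrap.servers", "localhost:39115");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"<admin-password>\";");
            try (Admin admin = Admin.create(props)) {
                // The client sends API_VERSIONS on its own before describeCluster,
                // exactly as the NetworkClient DEBUG entries above show.
                DescribeClusterResult cluster = admin.describeCluster();
                System.out.println("clusterId=" + cluster.clusterId().get());
                System.out.println("nodes=" + cluster.nodes().get());
            }
        }
    }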
11:11:11 11:11:11.871 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:37746-4","totalTimeMs":1.535,"requestQueueTimeMs":0.335,"localTimeMs":0.903,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.101,"sendTimeMs":0.194,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:11:11 11:11:11.871 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG 
org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 11:11:11 11:11:11.878 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:11:11 11:11:11.879 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:11:11 ] 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:11:11 , 'ip,'127.0.0.1 11:11:11 ] 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:11:11 11:11:11.879 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 11:11:11 11:11:11.879 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1768216249093,1768216249093,0,0,0,0,0,0,15} 11:11:11 11:11:11.880 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x10000020e9e0000 after 1ms. 
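Editor's note: the controller keeps its producer-id bookkeeping in the /latest_producer_id_block znode, and the getData request above is how it reads the current block before allocating a new one. The following sketch, under the assumption that the embedded ZooKeeper at 127.0.0.1:39173 from this log is reachable and that the node is world-readable (as its ACL entries above indicate), shows how that znode could be inspected directly with the ZooKeeper Java client.

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class ProducerIdBlockReader {
        public static void main(String[] args) throws Exception {
            // Connect string taken from the log (cport:39173); a no-op watcher is enough for a one-off read.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:39173", 30_000, event -> { });
            try {
                Stat stat = new Stat();
                byte[] data = zk.getData("/latest_producer_id_block", false, stat);
                // Prints the current block descriptor (JSON once the controller has written
                // the first block) plus the znode version used for the conditional update.
                System.out.println(new String(data, StandardCharsets.UTF_8)
                    + " (version=" + stat.getVersion() + ")");
            } finally {
                zk.close();
            }
        }
    }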
11:11:11 11:11:11.881 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 11:11:11 11:11:11.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x10000020e9e0000 11:11:11 11:11:11.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 11:11:11 11:11:11.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 11:11:11 ] 11:11:11 11:11:11.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 11:11:11 , 'ip,'127.0.0.1 11:11:11 ] 11:11:11 11:11:11.882 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 267381572106 11:11:11 11:11:11.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 11:11:11 11:11:11.886 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 11:11:11 11:11:11.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 269372888925 11:11:11 11:11:11.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:setData cxid:0xfd zxid:0x8c txntype:5 reqpath:n/a 11:11:11 11:11:11.887 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 253,5 replyHeader:: 253,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1768216249093,1768216271882,1,0,0,0,60,0,15} 11:11:11 11:11:11.888 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 11:11:11 11:11:11.888 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 11:11:11 11:11:11.891 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 11:11:11 11:11:11.892 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:39115-127.0.0.1:37746-4","totalTimeMs":19.154,"requestQueueTimeMs":1.629,"localTimeMs":1.435,"remoteTimeMs":15.458,"throttleTimeMs":0,"responseQueueTimeMs":0.237,"sendTimeMs":0.392,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.892 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 11:11:11 11:11:11.895 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 11:11:11 11:11:11.896 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] ProducerId set to 0 with epoch 0 11:11:11 11:11:11.896 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Transition from state INITIALIZING to READY 11:11:11 11:11:11.896 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:37744-4","totalTimeMs":43.646,"requestQueueTimeMs":2.084,"localTimeMs":41.268,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.201,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.897 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:11 11:11:11.897 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using 
address localhost/127.0.0.1 11:11:11 11:11:11.897 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:11 11:11:11.897 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:11 11:11:11.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:37748 11:11:11 11:11:11.898 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37748 on /127.0.0.1:39115 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:11:11 11:11:11.899 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 11:11:11 11:11:11.900 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:11:11 11:11:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 11:11:11 11:11:11.900 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:11:11 11:11:11.900 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Completed connection to node 1. Fetching API versions. 
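Editor's note: the SEND_APIVERSIONS_REQUEST / SEND_HANDSHAKE_REQUEST state transitions above are driven entirely by client configuration; application code never performs the SASL handshake explicitly. A hedged sketch of producer properties that would produce this SASL PLAIN flow against the logged listener follows; everything except the admin principal and listener address is an assumed placeholder.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SaslPlainProducerConfig {
        static KafkaProducer<String, String> newProducer() {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:39115");   // listener from the log
            props.put("security.protocol", "SASL_PLAINTEXT");    // matches the SASL_PLAINTEXT listener
            props.put("sasl.mechanism", "PLAIN");                // mechanism negotiated above
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"<admin-password>\";"); // placeholder secret
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            // With these settings the client walks the SASL state machine
            // (APIVERSIONS -> HANDSHAKE -> AUTHENTICATE) on first connect, as logged above.
            return new KafkaProducer<>(props);
        }
    }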
11:11:11 11:11:11.901 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:11:11 11:11:11.901 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:11:11 11:11:11.902 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:11:11 11:11:11.902 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:11:11 11:11:11.902 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:11:11 11:11:11.902 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to INITIAL 11:11:11 11:11:11.902 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:11:11 11:11:11.902 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to INTERMEDIATE 11:11:11 11:11:11.903 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:11:11 11:11:11.903 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:11:11 11:11:11.903 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:11:11 11:11:11.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to COMPLETE 11:11:11 11:11:11.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Finished authentication with no session expiration and no session re-authentication 11:11:11 11:11:11.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Successfully authenticated with localhost/127.0.0.1 11:11:11 11:11:11.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating API versions fetch from node 1. 11:11:11 11:11:11.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 11:11:11 11:11:11.905 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0
,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:39115-127.0.0.1:37748-5","totalTimeMs":0.842,"requestQueueTimeMs":0.195,"localTimeMs":0.461,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.055,"sendTimeMs":0.13,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:11:11 11:11:11.905 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), 
ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 11:11:11 11:11:11.906 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], 
DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 11:11:11 11:11:11.913 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 11:11:11 11:11:11.913 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 11:11:11 11:11:11.917 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 11:11:11 11:11:11.951 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1768216251626, current time: 1768216271951,unflushed: 3 11:11:11 11:11:11.955 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 11:11:11 11:11:11.955 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 29 ms 11:11:11 11:11:11.966 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 11:11:11 11:11:11.966 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:39115-127.0.0.1:37748-5","totalTimeMs":47.688,"requestQueueTimeMs":4.805,"localTimeMs":42.51,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.145,"sendTimeMs":0.226,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.970 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 11:11:11 11:11:11.970 [data-plane-kafka-request-handler-0] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1613820950 returning 1 partition(s) 11:11:11 11:11:11.974 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 11:11:11 11:11:11.976 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1613820950, responses=[FetchableTopicResponse(topic='', topicId=QbCSIHehTz6oyvgrkYsJtw, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 11:11:11 11:11:11.976 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1613820950 with 1 response partition(s) 11:11:11 11:11:11.976 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1613820950,"sessionEpoch":31,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1613820950,"responses":[{"topicId":"QbCSIHehTz6oyvgrkYsJtw","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:39115-127.0.0.1:43334-3","totalTimeMs":259.814,"requestQueueTimeMs":0.205,"localTimeMs":1.381,"remoteTimeMs":255.63,"throttleTimeMs":0,"responseQueueTimeMs":0.065,"sendTimeMs":2.531,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 11:11:11 11:11:11.976 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 11:11:12 11:11:11.978 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:39115 (id: 1 rack: null)], epoch=0}} to node localhost:39115 (id: 1 rack: null) 11:11:12 11:11:11.978 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Built incremental fetch (sessionId=1613820950, epoch=32) for node 1. 
Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 11:11:12 11:11:11.978 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:39115 (id: 1 rack: null) 11:11:12 11:11:11.978 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 11:11:12 11:11:11.979 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1613820950, epoch 33: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 11:11:12 11:11:11.992 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 11:11:12 11:11:11.992 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 11:11:12 11:11:11.995 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:11.995 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:11.995 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:11.995 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:37750 on /127.0.0.1:39115 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 11:11:12 11:11:11.995 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:11.995 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:37750 11:11:12 11:11:11.996 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 11:11:12 11:11:11.996 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 11:11:12 11:11:11.996 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 
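Editor's note: the FETCH request and incremental fetch session (sessionId/epoch) seen a few entries above are issued from an ordinary poll loop; the session bookkeeping is handled by the client library, not by application code. A minimal consumer sketch for the my-test-topic / mso-group pair appearing in this log, reusing the same SASL settings with a placeholder secret:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MyTestTopicConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:39115");
            props.put("group.id", "mso-group");                  // group id from the log
            props.put("auto.offset.reset", "earliest");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"<admin-password>\";");
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // Each poll() sends a FETCH carrying the next session epoch; the broker replies
                // with an incremental response containing only partitions that changed.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }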
11:11:12 11:11:11.997 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 11:11:12 11:11:11.997 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 11:11:12 11:11:11.997 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 11:11:12 11:11:11.997 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 11:11:12 11:11:11.997 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 11:11:12 11:11:11.997 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 11:11:12 11:11:11.997 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 11:11:12 11:11:11.998 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 11:11:12 11:11:11.998 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 11:11:12 11:11:11.998 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 11:11:12 11:11:11.998 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 11:11:12 11:11:11.998 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 11:11:12 11:11:11.998 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 11:11:12 11:11:11.998 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 11:11:12 11:11:11.998 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session expiration and no session re-authentication 11:11:12 11:11:11.998 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 11:11:12 11:11:11.999 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, 
clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 11:11:12 11:11:12.002 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 11:11:12 11:11:12.003 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 11:11:12 11:11:12.004 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 11:11:12 11:11:12.010 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 11:11:12 11:11:12.015 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 11:11:12 11:11:12.016 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 17ms 11:11:12 11:11:12.016 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:39115-127.0.0.1:37750-5","totalTimeMs":15.933,"requestQueueTimeMs":0.77,"localTimeMs":0.995,"remoteTimeMs":13.906,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.168,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 11:11:12 11:11:12.016 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:39115-127.0.0.1:37750-5) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at kafka.network.Processor.poll(SocketServer.scala:1055) 11:11:12 at kafka.network.Processor.run(SocketServer.scala:959) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.019 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 11:11:12 11:11:12.019 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 11:11:12 11:11:12.020 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 11:11:12 11:11:12.020 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - 
[/config/changes-event-process-thread]: Stopped 11:11:12 11:11:12.021 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-39115] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 11:11:12 11:11:12.021 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 11:11:12 11:11:12.023 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 11:11:12 11:11:12.023 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:43336-3 11:11:12 11:11:12.023 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:43334-3 11:11:12 11:11:12.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:43332-2 11:11:12 11:11:12.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:37746-4 11:11:12 11:11:12.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:43322-0 11:11:12 11:11:12.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:37748-5 11:11:12 11:11:12.024 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:39115-127.0.0.1:37744-4 11:11:12 11:11:12.024 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) 11:11:12 at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) 11:11:12 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 11:11:12 11:11:12.025 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.025 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 11:11:12 11:11:12.025 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:12 11:11:12.029 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.030 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.030 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node -1 disconnected. 
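Editor's note: the EOFException stack traces above are the expected client-side view of the broker closing its sockets during shutdown; the producer merely logs "Node 1 disconnected" and retries metadata. The usual counterpart in test code is to flush and close the producer with a bounded timeout around broker teardown, roughly as sketched below for the hypothetical producer instance from the earlier configuration sketch.

    import java.time.Duration;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerTeardown {
        // 'producer' is the assumed instance created in the earlier sketch.
        static void sendAndClose(KafkaProducer<String, String> producer) {
            producer.send(new ProducerRecord<>("my-test-topic", "key", "value"));
            // flush() blocks until in-flight batches are acknowledged, so the PRODUCE
            // completes before the broker begins its controlled shutdown.
            producer.flush();
            // close(Duration) waits up to the timeout for outstanding requests, then
            // releases the network threads whose disconnects are logged above.
            producer.close(Duration.ofSeconds(5));
        }
    }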
11:11:12 11:11:12.030 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 11:11:12 11:11:12.031 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 11:11:12 11:11:12.032 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 11:11:12 11:11:12.032 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 11:11:12 11:11:12.035 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 11:11:12 11:11:12.036 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 11:11:12 11:11:12.045 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 11:11:12 11:11:12.047 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 11:11:12 11:11:12.047 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 11:11:12 11:11:12.050 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 11:11:12 11:11:12.051 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 11:11:12 11:11:12.052 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 11:11:12 11:11:12.052 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 11:11:12 11:11:12.056 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 11:11:12 11:11:12.056 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 11:11:12 11:11:12.058 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 11:11:12 11:11:12.058 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 11:11:12 11:11:12.058 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 11:11:12 11:11:12.058 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 11:11:12 11:11:12.059 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 11:11:12 11:11:12.060 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 11:11:12 11:11:12.060 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
11:11:12 11:11:12.060 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 11:11:12 11:11:12.061 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 11:11:12 11:11:12.061 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 11:11:12 11:11:12.063 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 11:11:12 11:11:12.064 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 11:11:12 11:11:12.064 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 11:11:12 11:11:12.065 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 11:11:12 11:11:12.065 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 11:11:12 11:11:12.066 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 11:11:12 11:11:12.066 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 11:11:12 11:11:12.066 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 11:11:12 11:11:12.066 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 11:11:12 11:11:12.067 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 11:11:12 11:11:12.068 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 11:11:12 11:11:12.068 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 11:11:12 11:11:12.068 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 11:11:12 11:11:12.069 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 11:11:12 11:11:12.069 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 11:11:12 11:11:12.069 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 11:11:12 11:11:12.070 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 11:11:12 11:11:12.070 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 11:11:12 11:11:12.070 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 11:11:12 11:11:12.071 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 11:11:12 11:11:12.071 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 11:11:12 11:11:12.072 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 11:11:12 11:11:12.074 
[main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed 11:11:12 11:11:12.074 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 11:11:12 11:11:12.080 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 11:11:12 11:11:12.080 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 11:11:12 11:11:12.080 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 11:11:12 11:11:12.080 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 11:11:12 11:11:12.082 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 11:11:12 11:11:12.083 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 11:11:12 11:11:12.083 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 11:11:12 11:11:12.083 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 11:11:12 11:11:12.084 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 11:11:12 11:11:12.084 [main] INFO kafka.log.LogManager - Shutting down. 11:11:12 11:11:12.085 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 
11:11:12 11:11:12.086 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 11:11:12 11:11:12.086 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 11:11:12 11:11:12.086 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 11:11:12 11:11:12.088 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit8944902187107510952 11:11:12 11:11:12.091 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252385, current time: 1768216272091,unflushed: 0 11:11:12 11:11:12.093 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.094 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.098 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.101 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252566, current time: 1768216272101,unflushed: 0 11:11:12 11:11:12.103 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.103 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.103 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.103 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252474, current time: 1768216272103,unflushed: 0 11:11:12 11:11:12.105 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.105 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.105 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.106 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, 
dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252557, current time: 1768216272106,unflushed: 0 11:11:12 11:11:12.107 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.108 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.108 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.108 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252482, current time: 1768216272108,unflushed: 0 11:11:12 11:11:12.110 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.110 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.110 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.111 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252467, current time: 1768216272111,unflushed: 0 11:11:12 11:11:12.112 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.112 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.112 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.113 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252574, current time: 1768216272113,unflushed: 0 11:11:12 11:11:12.114 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.115 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.115 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.115 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.115 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.115 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected 11:11:12 java.io.EOFException: null 11:11:12 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 11:11:12 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 11:11:12 at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.115 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:12 11:11:12.116 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 66 due to node 1 being disconnected (elapsed time since creation: 137ms, elapsed time since send: 137ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1613820950, sessionEpoch=32, topics=[FetchTopic(topic='my-test-topic', topicId=QbCSIHehTz6oyvgrkYsJtw, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 11:11:12 11:11:12.116 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node -1 disconnected. 11:11:12 11:11:12.116 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 2147483646 disconnected. 11:11:12 11:11:12.116 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, correlationId=66) due to node 1 being disconnected 11:11:12 11:11:12.117 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252212, current time: 1768216272117,unflushed: 0 11:11:12 11:11:12.117 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Error sending fetch request (sessionId=1613820950, epoch=32) to node 1: 11:11:12 org.apache.kafka.common.errors.DisconnectException: null 11:11:12 11:11:12.117 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Group coordinator localhost:39115 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 
11:11:12 11:11:12.118 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.119 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.119 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.119 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.120 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252528, current time: 1768216272120,unflushed: 0 11:11:12 11:11:12.122 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.122 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.122 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.123 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252168, current time: 1768216272123,unflushed: 0 11:11:12 11:11:12.124 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.124 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.125 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.125 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252318, current time: 1768216272125,unflushed: 0 11:11:12 11:11:12.127 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.127 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.127 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.127 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252151, current time: 1768216272127,unflushed: 0 11:11:12 11:11:12.129 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.129 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.129 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.130 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252133, current time: 1768216272130,unflushed: 0 11:11:12 11:11:12.131 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.131 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.131 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.132 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.132 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.132 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.132 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.132 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.132 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1768216270844, current time: 1768216272132,unflushed: 0 11:11:12 11:11:12.133 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.134 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.134 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:12 11:11:12.134 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:12 11:11:12.139 [log-closing-/tmp/kafka-unit8944902187107510952] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 4 ms. 
11:11:12 11:11:12.140 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.140 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 11:11:12 11:11:12.141 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252442, current time: 1768216272141,unflushed: 0 11:11:12 11:11:12.143 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.143 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.143 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.143 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252250, current time: 1768216272143,unflushed: 0 11:11:12 11:11:12.145 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.145 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.145 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.145 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252220, current time: 1768216272145,unflushed: 0 11:11:12 11:11:12.147 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.147 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.147 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.148 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, 
dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1768216271955, current time: 1768216272148,unflushed: 0 11:11:12 11:11:12.148 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.152 [log-closing-/tmp/kafka-unit8944902187107510952] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 3 ms. 11:11:12 11:11:12.152 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.152 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 11:11:12 11:11:12.153 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252108, current time: 1768216272153,unflushed: 0 11:11:12 11:11:12.154 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.155 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.155 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.155 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252329, current time: 1768216272155,unflushed: 0 11:11:12 11:11:12.157 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.157 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.157 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.157 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252505, current time: 1768216272157,unflushed: 0 11:11:12 11:11:12.159 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.159 
[log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.159 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.159 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252347, current time: 1768216272159,unflushed: 0 11:11:12 11:11:12.160 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.161 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.161 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.161 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252355, current time: 1768216272161,unflushed: 0 11:11:12 11:11:12.163 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.164 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.164 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.164 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252231, current time: 1768216272164,unflushed: 0 11:11:12 11:11:12.166 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.166 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.166 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.167 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, 
dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252293, current time: 1768216272167,unflushed: 0 11:11:12 11:11:12.168 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.169 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.169 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.169 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252394, current time: 1768216272169,unflushed: 0 11:11:12 11:11:12.171 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.171 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.171 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.171 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252521, current time: 1768216272171,unflushed: 0 11:11:12 11:11:12.173 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.173 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.173 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.173 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252537, current time: 1768216272173,unflushed: 0 11:11:12 11:11:12.174 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.174 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.175 
[log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.175 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252450, current time: 1768216272175,unflushed: 0 11:11:12 11:11:12.176 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.176 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.176 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.177 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252283, current time: 1768216272177,unflushed: 0 11:11:12 11:11:12.178 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.178 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.178 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.179 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252143, current time: 1768216272179,unflushed: 0 11:11:12 11:11:12.180 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.180 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.180 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.181 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252203, current time: 1768216272181,unflushed: 0 11:11:12 11:11:12.182 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.182 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.182 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.182 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252497, current time: 1768216272182,unflushed: 0 11:11:12 11:11:12.184 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.184 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.184 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.184 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252338, current time: 1768216272184,unflushed: 0 11:11:12 11:11:12.186 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.186 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.186 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.186 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252121, current time: 1768216272186,unflushed: 0 11:11:12 11:11:12.188 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.188 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.188 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.188 
[log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252369, current time: 1768216272188,unflushed: 0 11:11:12 11:11:12.189 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.190 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.190 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.190 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252159, current time: 1768216272190,unflushed: 0 11:11:12 11:11:12.191 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.191 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.191 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.192 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252377, current time: 1768216272192,unflushed: 0 11:11:12 11:11:12.193 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.193 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.193 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.193 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252266, current time: 1768216272193,unflushed: 0 11:11:12 11:11:12.195 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.195 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.195 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.195 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252409, current time: 1768216272195,unflushed: 0 11:11:12 11:11:12.196 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.197 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.197 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.197 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252513, current time: 1768216272197,unflushed: 0 11:11:12 11:11:12.199 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.199 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.199 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.199 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252584, current time: 1768216272199,unflushed: 0 11:11:12 11:11:12.201 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.201 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.201 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.201 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 
1768216252192, current time: 1768216272201,unflushed: 0 11:11:12 11:11:12.202 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.203 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.203 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.203 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252259, current time: 1768216272203,unflushed: 0 11:11:12 11:11:12.204 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.205 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.205 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.205 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252489, current time: 1768216272205,unflushed: 0 11:11:12 11:11:12.206 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.206 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.207 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.207 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252274, current time: 1768216272207,unflushed: 0 11:11:12 11:11:12.208 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.208 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.208 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit8944902187107510952/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.209 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252459, current time: 1768216272209,unflushed: 0 11:11:12 11:11:12.210 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.210 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.210 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.211 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252419, current time: 1768216272210,unflushed: 0 11:11:12 11:11:12.212 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.212 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.212 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.212 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252176, current time: 1768216272212,unflushed: 0 11:11:12 11:11:12.214 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.214 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.214 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.214 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252241, current time: 1768216272214,unflushed: 0 11:11:12 11:11:12.216 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, 
dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.216 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.216 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.216 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit8944902187107510952] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1768216252401, current time: 1768216272216,unflushed: 0 11:11:12 11:11:12.218 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit8944902187107510952] Closing log 11:11:12 11:11:12.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.218 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 11:11:12 11:11:12.218 [log-closing-/tmp/kafka-unit8944902187107510952] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit8944902187107510952/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 11:11:12 11:11:12.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.219 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit8944902187107510952 11:11:12 11:11:12.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at 
org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.219 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:12 11:11:12.219 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:12 11:11:12.220 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.225 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit8944902187107510952 11:11:12 11:11:12.231 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit8944902187107510952 11:11:12 11:11:12.233 [main] INFO kafka.log.LogManager - Shutdown complete. 11:11:12 11:11:12.234 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 11:11:12 11:11:12.234 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 11:11:12 11:11:12.234 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 11:11:12 11:11:12.235 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 11:11:12 11:11:12.235 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.235 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 11:11:12 11:11:12.236 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
11:11:12 11:11:12.237 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 11:11:12 11:11:12.238 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 11:11:12 11:11:12.238 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 11:11:12 11:11:12.238 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 11:11:12 11:11:12.238 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 11:11:12 11:11:12.241 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 11:11:12 11:11:12.241 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 11:11:12 11:11:12.241 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 11:11:12 11:11:12.241 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 11:11:12 11:11:12.242 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 11:11:12 11:11:12.242 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 11:11:12 11:11:12.242 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x10000020e9e0000 11:11:12 11:11:12.242 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x10000020e9e0000 11:11:12 11:11:12.243 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 269372888925 11:11:12 11:11:12.244 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270510307493 11:11:12 11:11:12.244 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269363048027 11:11:12 11:11:12.244 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270891142067 11:11:12 11:11:12.245 [ProcessThread(sid:0 cport:39173):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267845296491 11:11:12 11:11:12.247 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x10000020e9e0000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 11:11:12 11:11:12.247 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x10000020e9e0000 11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x10000020e9e0000 11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x10000020e9e0000 11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 267845296491 
11:11:12 11:11:12.248 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x10000020e9e0000 type:closeSession cxid:0xfe zxid:0x8d txntype:-11 reqpath:n/a 11:11:12 11:11:12.248 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:11:12 11:11:12.248 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x10000020e9e0000 11:11:12 11:11:12.249 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:11:12 11:11:12.249 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x10000020e9e0000 11:11:12 11:11:12.249 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 11:11:12 11:11:12.250 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x10000020e9e0000 11:11:12 11:11:12.250 [NIOWorkerThread-2] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:39794 which had sessionid 0x10000020e9e0000 11:11:12 11:11:12.250 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x10000020e9e0000 11:11:12 11:11:12.250 [main-SendThread(127.0.0.1:39173)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x10000020e9e0000, packet:: clientPath:null serverPath:null finished:false header:: 254,-11 replyHeader:: 254,141,0 request:: null response:: null 11:11:12 11:11:12.250 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x10000020e9e0000 11:11:12 11:11:12.250 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 11:11:12 11:11:12.250 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 11:11:12 11:11:12.250 [main-SendThread(127.0.0.1:39173)] WARN org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x10000020e9e0000. 
11:11:12 org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x10000020e9e0000, likely server has closed socket 11:11:12 at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) 11:11:12 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) 11:11:12 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) 11:11:12 11:11:12.285 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.286 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.286 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.286 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.286 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.287 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.288 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 
11:11:12 11:11:12.288 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:12 11:11:12.320 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.320 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.320 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.321 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.321 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.322 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 
11:11:12 11:11:12.322 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:12 11:11:12.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.351 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 11:11:12 11:11:12.353 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x10000020e9e0000 closed 11:11:12 11:11:12.353 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x10000020e9e0000 11:11:12 11:11:12.355 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 11:11:12 11:11:12.355 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 11:11:12 11:11:12.359 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 11:11:12 11:11:12.359 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 11:11:12 11:11:12.359 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 11:11:12 11:11:12.359 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 11:11:12 11:11:12.359 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 11:11:12 11:11:12.360 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 11:11:12 11:11:12.360 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 11:11:12 11:11:12.360 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 11:11:12 11:11:12.360 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 11:11:12 11:11:12.360 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 11:11:12 11:11:12.360 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 11:11:12 11:11:12.361 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 11:11:12 11:11:12.386 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 11:11:12 11:11:12.387 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 11:11:12 11:11:12.387 [main] INFO 
org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 11:11:12 11:11:12.387 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 11:11:12 11:11:12.388 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 11:11:12 11:11:12.389 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.389 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 11:11:12 11:11:12.389 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 11:11:12 11:11:12.389 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 11:11:12 11:11:12.390 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 11:11:12 11:11:12.390 [NIOServerCxnFactory.SelectorThread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 11:11:12 11:11:12.391 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:39173] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 11:11:12 11:11:12.391 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 11:11:12 11:11:12.394 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 11:11:12 11:11:12.394 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 11:11:12 11:11:12.395 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 11:11:12 11:11:12.395 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 11:11:12 11:11:12.396 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 11:11:12 11:11:12.396 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 11:11:12 11:11:12.396 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 11:11:12 11:11:12.396 [ProcessThread(sid:0 cport:39173):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 11:11:12 11:11:12.396 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 
11:11:12 11:11:12.396 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 11:11:12 11:11:12.397 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit1168587096075541194/version-2/log.1 11:11:12 11:11:12.397 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit1168587096075541194/version-2/log.1 11:11:12 11:11:12.401 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception 11:11:12 java.io.EOFException: Failed to read /tmp/kafka-unit1168587096075541194/version-2/log.1 11:11:12 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) 11:11:12 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:650) 11:11:12 at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) 11:11:12 at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) 11:11:12 at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) 11:11:12 at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) 11:11:12 at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) 11:11:12 at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) 11:11:12 at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) 11:11:12 at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) 11:11:12 at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) 11:11:12 at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) 11:11:12 at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) 11:11:12 at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) 11:11:12 at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) 11:11:12 at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) 11:11:12 at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) 11:11:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:12 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:12 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:12 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:12 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:12 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:12 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) 11:11:12 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) 11:11:12 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:12 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:12 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:12 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:12 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) 11:11:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:12 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) 11:11:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:12 at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) 11:11:12 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) 11:11:12 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) 11:11:12 at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) 11:11:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) 11:11:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:12 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:12 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:12 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:12 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:12 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:12 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:12 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:12 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:12 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:12 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:12 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:12 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:12 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:12 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:12 11:11:12.401 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown! 
11:11:12 11:11:12.402 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 11:11:12 11:11:12.402 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 11:11:12 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.862 s - in org.onap.sdc.utils.SdcKafkaTest 11:11:12 [INFO] Running org.onap.sdc.utils.NotificationSenderTest 11:11:12 11:11:12.598 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.598 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:12 11:11:12.599 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.599 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:12 11:11:12.599 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.599 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:12 11:11:12.600 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.600 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:12 11:11:12.600 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.600 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:12 11:11:12.604 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:12 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:12 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:12 11:11:12.604 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:12 java.net.ConnectException: Connection refused 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:12 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:12 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:12 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:12 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:12 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:12 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:12 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:12 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:12 11:11:12.605 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:12 11:11:12.605 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:12 11:11:12.605 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:12 11:11:12.605 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
11:11:12 11:11:12.605 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.801 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.802 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:12 11:11:12.803 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.847 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:12 11:11:12.848 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 11:11:12 11:11:12.849 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status 11:11:12 to topic null 11:11:12 11:11:12.852 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.903 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:12 11:11:12.903 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:12 11:11:12.904 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:12 11:11:12.953 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.004 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:13 11:11:13.004 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) 
for sending metadata request 11:11:13 11:11:13.004 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:13 11:11:13.004 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:13 11:11:13.004 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:13 11:11:13.005 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:13 11:11:13.005 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:13 11:11:13.005 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:13 11:11:13.006 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:13 11:11:13.006 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:13 11:11:13.009 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:13 java.net.ConnectException: Connection refused 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:13 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:13 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:13 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:13 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:13 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:13 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:13 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:13 at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:13 11:11:13.009 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:13 java.net.ConnectException: Connection refused 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:13 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:13 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:13 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:13 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:13 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:13 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:13 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:13 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:13 11:11:13.010 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:13 11:11:13.010 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:13 11:11:13.010 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:13 11:11:13.010 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
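The repeated "Node 1 disconnected" / "Broker may not be available" warnings above come from the producer and consumer network threads retrying a broker endpoint that nothing is listening on (presumably the embedded test broker at localhost:39115, which is no longer running at this point in the run). A minimal, self-contained sketch of how the standard kafka-clients producer behaves against such an unreachable bootstrap address follows; the topic name and the 5-second max.block.ms are illustrative assumptions, not values taken from this build.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class UnreachableBrokerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Nothing listens on this port, mirroring the torn-down broker in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Fail after 5 s instead of the default 60 s metadata wait (illustrative value).
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() blocks waiting for topic metadata; meanwhile the network thread keeps
            // logging "Node 1 disconnected" / "Broker may not be available" until
            // max.block.ms expires and a TimeoutException (a KafkaException) is thrown.
            producer.send(new ProducerRecord<>("SDC-DISTR-STATUS-TOPIC", "status")); // topic name is hypothetical
        } catch (KafkaException e) {
            System.err.println("send failed as expected: " + e);
        }
    }
}

Once the metadata wait expires the client surfaces a KafkaException to the caller, which is consistent with the bare org.apache.kafka.common.KafkaException the NotificationSender error handler logs further down in this run.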
11:11:13 11:11:13.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.110 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.111 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.161 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.211 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.211 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.212 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.262 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.311 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.312 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.313 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.363 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.412 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.412 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.414 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.464 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.513 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.513 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.515 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.565 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.613 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.616 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.666 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.714 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.717 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.767 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:13 11:11:13.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.818 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.863 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:13 11:11:13.863 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 11:11:13 11:11:13.863 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status 11:11:13 to topic null 11:11:13 11:11:13.869 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:13 11:11:13.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:13 11:11:13.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:13 11:11:13.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 
(id: 1 rack: null) using address localhost/127.0.0.1 11:11:13 11:11:13.917 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:13 11:11:13.917 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:13 11:11:13.919 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:13 java.net.ConnectException: Connection refused 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:13 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:13 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:13 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:13 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:13 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:13 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:13 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:13 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:13 11:11:13.919 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:13 11:11:13.919 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:13 11:11:13.919 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
11:11:13 11:11:13.919 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:13 11:11:13.919 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:13 11:11:13.920 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:13 11:11:13.920 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:13 11:11:13.920 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:13 11:11:13.921 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:13 java.net.ConnectException: Connection refused 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:13 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:13 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:13 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:13 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:13 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:13 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:13 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:13 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:13 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:13 11:11:13.922 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:13 11:11:13.922 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
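Both clients in this run negotiate SASL before the TCP connect fails, which is why each attempt logs "Set SASL client state to SEND_APIVERSIONS_REQUEST" and "Creating SaslClient: ... mechs=[PLAIN]". A generic sketch of the client-side properties that select this SASL/PLAIN path is below; the security protocol, username, and password are assumptions for illustration, since the harness's actual JAAS settings are not visible in the log.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslPlainClientConfigSketch {
    // Builds client properties for SASL/PLAIN over a plaintext channel; this is the
    // configuration that drives the SaslClientAuthenticator state machine seen above.
    static Properties saslPlainProps(String bootstrapServers, String user, String password) {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT"); // assumed; SASL_SSL also possible
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }
}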
11:11:14 11:11:14.020 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.021 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.023 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.073 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.124 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.149 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 
11:11:14 11:11:14.174 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.221 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.222 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.225 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.275 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.322 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.325 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.376 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.423 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.423 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.426 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.477 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.523 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.524 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.527 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.577 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.624 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.625 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.628 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.678 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.725 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.725 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.729 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.779 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:14 11:11:14.779 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:14 11:11:14.779 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:14 11:11:14.780 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:14 11:11:14.780 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:14 11:11:14.782 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:14 java.net.ConnectException: Connection refused 11:11:14 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:14 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:14 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:14 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:14 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:14 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:14 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:14 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:14 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:14 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:14 11:11:14.782 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:14 11:11:14.782 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
11:11:14 11:11:14.826 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:14 11:11:14.826 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:14 11:11:14.865 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher. 11:11:14 org.apache.kafka.common.KafkaException: null 11:11:14 11:11:14.883 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:14 11:11:14.888 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:14 11:11:14.888 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 11:11:14 11:11:14.889 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status 11:11:14 to topic null 11:11:14 11:11:14.889 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status 11:11:14 org.apache.kafka.common.KafkaException: null 11:11:14 at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65) 11:11:14 at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:48) 11:11:14 at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:84) 11:11:14 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:14 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:14 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:14 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:14 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:14 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:14 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:14 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:14 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:14 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:14 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:14 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:14 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:14 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:14 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:14 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:14 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:14 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:14 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:14 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:14 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:14 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:14 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:14 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:14 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:14 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:14 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:14 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:14 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:14 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:14 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:14 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:14 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:14 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:14 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:14 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:14 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:14 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:14 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:14 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:14 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:14 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:14 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:14 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:14 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:14 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:14 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.486 s - in org.onap.sdc.utils.NotificationSenderTest 11:11:14 [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest 11:11:14 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in org.onap.sdc.utils.KafkaCommonConfigTest 11:11:14 [INFO] Running org.onap.sdc.utils.GeneralUtilsTest 11:11:14 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.utils.GeneralUtilsTest 11:11:14 [INFO] Running org.onap.sdc.impl.NotificationConsumerTest 11:11:15 11:11:15.087 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:15 11:11:15.088 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.088 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:15 11:11:15.088 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:15 11:11:15.089 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:15 11:11:15.089 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:15 11:11:15.091 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:15 java.net.ConnectException: Connection refused 11:11:15 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:15 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:15 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:15 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:15 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:15 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:15 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:15 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:15 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:15 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:15 11:11:15.092 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:15 11:11:15.092 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
11:11:15 11:11:15.092 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.270 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.271 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.272 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.322 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.563 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.563 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.565 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.576 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:15 11:11:15.576 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:15 11:11:15.582 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:15 11:11:15.615 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.665 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.665 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.666 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.680 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:15 11:11:15.715 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.766 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.766 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.766 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.781 [pool-8-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:15 11:11:15.817 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:15 11:11:15.817 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:15 11:11:15.817 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:15 11:11:15.818 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:15 11:11:15.818 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:15 11:11:15.821 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:15 java.net.ConnectException: 
Connection refused 11:11:15 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:15 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:15 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:15 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:15 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:15 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:15 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:15 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:15 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:15 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:15 11:11:15.821 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:15 11:11:15.821 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:15 11:11:15.867 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.867 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.881 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:15 11:11:15.921 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.968 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:15 11:11:15.968 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:15 11:11:15.972 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:15 11:11:15.982 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.022 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.069 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.069 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.073 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.080 [pool-8-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.123 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.170 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.170 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.174 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.181 [pool-8-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.224 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.270 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:16 11:11:16.271 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:16 11:11:16.271 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) 
using address localhost/127.0.0.1 11:11:16 11:11:16.271 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:16 11:11:16.272 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:16 11:11:16.273 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:16 java.net.ConnectException: Connection refused 11:11:16 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:16 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:16 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:16 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:16 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:16 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:16 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:16 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:16 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:16 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:16 11:11:16.273 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:16 11:11:16.273 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
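Annotation: the consumer cycle above (resolve localhost, prepare the SASL PLAIN handshake, then hit java.net.ConnectException because nothing is listening on 127.0.0.1:39115) is what the kafka-clients NetworkClient prints when it is pointed at a broker address that is not up. The job's actual client configuration is not echoed in this log; the following is a minimal sketch, assuming standard kafka-clients settings, of a consumer whose startup would produce the SaslClientAuthenticator/Selector lines seen here. Only the bootstrap address, group id, client-id prefix and PLAIN mechanism are taken from the log; the JAAS credentials are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:39115"); // address/port seen in the log; looks like an ephemeral test port
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // groupId printed in the log
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");     // clientId prefix printed in the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // SASL_PLAINTEXT with PLAIN matches "PlaintextTransportLayer" and "mechs=[PLAIN]" in the log.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"placeholder\" password=\"placeholder\";"); // placeholder credentials, not values from this build
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // With no broker on 39115, the coordinator lookup never succeeds and the NetworkClient
                // keeps logging "Connection refused" / "Broker may not be available", as above.
            }
        }
    }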
11:11:16 11:11:16.273 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.275 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.280 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.325 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.374 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.374 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.376 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.381 [pool-8-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.427 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.475 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.476 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.477 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.481 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.528 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request 
since no node is available 11:11:16 11:11:16.576 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.576 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.578 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.580 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.587 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:16 11:11:16.587 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:16 11:11:16.590 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.629 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.676 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.676 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.680 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.688 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.730 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.777 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.777 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.780 
[kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.789 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.790 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:16 11:11:16.790 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:16 "serviceName" : "Testnotificationser1", 11:11:16 "serviceVersion" : "1.0", 11:11:16 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:16 "serviceDescription" : "TestNotificationVF1", 11:11:16 "bugabuga" : "xyz", 11:11:16 "resources" : [{ 11:11:16 "resourceInstanceName" : "testnotificationvf11", 11:11:16 "resourceName" : "TestNotificationVF1", 11:11:16 "resourceVersion" : "1.0", 11:11:16 "resoucreType" : "VF", 11:11:16 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:16 "artifacts" : [{ 11:11:16 "artifactName" : "heat.yaml", 11:11:16 "artifactType" : "HEAT", 11:11:16 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:16 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:16 "artifactDescription" : "heat", 11:11:16 "artifactTimeout" : 60, 11:11:16 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:16 "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:16 "artifactVersion" : "1" 11:11:16 }, { 11:11:16 "artifactName" : "buga.bug", 11:11:16 "artifactType" : "BUGA_BUGA", 11:11:16 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:16 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:16 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:16 "artifactTimeout" : 0, 11:11:16 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:16 "artifactVersion" : "1", 11:11:16 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:16 } 11:11:16 ] 11:11:16 } 11:11:16 ]} 11:11:16 11:11:16.832 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:16 11:11:16.835 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:16 11:11:16.835 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:16 11:11:16.835 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state 
to SEND_APIVERSIONS_REQUEST 11:11:16 11:11:16.835 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:16 11:11:16.837 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 11:11:16 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:16 "serviceName": "Testnotificationser1", 11:11:16 "serviceVersion": "1.0", 11:11:16 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:16 "serviceDescription": "TestNotificationVF1", 11:11:16 "resources": [ 11:11:16 { 11:11:16 "resourceInstanceName": "testnotificationvf11", 11:11:16 "resourceName": "TestNotificationVF1", 11:11:16 "resourceVersion": "1.0", 11:11:16 "resoucreType": "VF", 11:11:16 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:16 "artifacts": [ 11:11:16 { 11:11:16 "artifactName": "heat.yaml", 11:11:16 "artifactType": "HEAT", 11:11:16 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:16 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:16 "artifactDescription": "heat", 11:11:16 "artifactTimeout": 60, 11:11:16 "artifactVersion": "1", 11:11:16 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:16 "relatedArtifactsInfo": [] 11:11:16 } 11:11:16 ] 11:11:16 } 11:11:16 ], 11:11:16 "serviceArtifacts": [] 11:11:16 } 11:11:16 11:11:16.837 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:16 java.net.ConnectException: Connection refused 11:11:16 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:16 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:16 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:16 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:16 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:16 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:16 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:16 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:16 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:16 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:16 11:11:16.838 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:16 11:11:16.838 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
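Annotation: the two NotificationConsumer entries above show the raw payload ("received notification from broker") followed by the parsed form handed to the registered callback ("sending notification to client"). On the application side this arrives through the distribution client's callback interface. The sketch below is not taken from this test; it assumes the public org.onap.sdc.api types (IDistributionClient, INotificationCallback, INotificationData) and the usual init/start sequence, so package paths and method names should be verified against the client version under test.

    // Assumed package layout of the sdc-distribution-client public API (verify against the version in use).
    import org.onap.sdc.api.IDistributionClient;
    import org.onap.sdc.api.consumer.IConfiguration;
    import org.onap.sdc.api.consumer.INotificationCallback;
    import org.onap.sdc.api.notification.IArtifactInfo;
    import org.onap.sdc.api.notification.INotificationData;
    import org.onap.sdc.api.notification.IResourceInstance;
    import org.onap.sdc.impl.DistributionClientFactory;

    public class MsoNotificationCallback implements INotificationCallback {

        @Override
        public void activateCallback(INotificationData data) {
            // Fields visible in the log entry above: distributionID, serviceName, resources[].artifacts[].
            System.out.println("distributionID=" + data.getDistributionID()
                + " service=" + data.getServiceName() + "/" + data.getServiceVersion());
            for (IResourceInstance resource : data.getResources()) {
                for (IArtifactInfo artifact : resource.getArtifacts()) {
                    // e.g. HEAT heat.yaml at /sdc/v1/catalog/services/.../artifacts/heat.yaml
                    System.out.println(artifact.getArtifactType() + " " + artifact.getArtifactURL());
                }
            }
        }

        public static void start(IConfiguration conf) {
            IDistributionClient client = DistributionClientFactory.createDistributionClient();
            client.init(conf, new MsoNotificationCallback()); // without a successful init, calls such as sendNotificationStatus log "client was not initialized", as above
            client.start();                                   // starts the NotificationConsumer polling threads seen in this log
        }
    }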
11:11:16 11:11:16.878 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.878 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.888 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:16 11:11:16.938 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.979 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:16 11:11:16.980 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:16 11:11:16.989 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:16 11:11:16.989 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.039 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.080 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.081 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.089 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.090 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.140 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 
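Annotation: the producer and consumer alternate between "Give up sending metadata request" roughly every 50-100 ms and a fresh "Initialize connection to node localhost:39115" attempt roughly once per second. That cadence is governed by the kafka-clients backoff settings; the job does not print which values it uses, so the sketch below simply shows the relevant keys filled with the library defaults (50 ms growing to 1000 ms for reconnects, 100 ms between retries), which is consistent with the timestamps above.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public final class ReconnectBackoffDefaults {
        // Backoff-related settings, filled with the kafka-clients default values (assumed, not read from this job).
        public static Properties defaults() {
            Properties p = new Properties();
            p.put(ProducerConfig.RECONNECT_BACKOFF_MS_CONFIG, "50");       // first reconnect attempt after 50 ms
            p.put(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, "1000"); // backoff grows up to 1 s, matching the ~1 s "Initialize connection" lines above
            p.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "100");          // wait between metadata/send retries
            return p;
        }
    }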
11:11:17 11:11:17.181 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.181 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.189 [pool-9-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.191 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.242 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.282 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.282 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.289 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.292 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.343 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.383 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:17 11:11:17.383 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:17 11:11:17.383 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:17 11:11:17.384 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:17 11:11:17.384 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:17 11:11:17.385 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:17 java.net.ConnectException: Connection refused 11:11:17 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:17 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:17 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:17 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:17 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:17 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:17 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:17 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:17 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:17 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:17 11:11:17.385 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:17 11:11:17.385 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
11:11:17 11:11:17.386 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.389 [pool-9-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.410 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.463 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.487 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.487 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.489 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.514 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.565 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.589 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.589 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.592 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.602 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:17 11:11:17.602 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:17 11:11:17.605 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.616 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.666 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.693 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.693 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.704 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.717 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:17 11:11:17.717 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:17 11:11:17.718 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:17 11:11:17.718 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:17 11:11:17.718 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:17 11:11:17.721 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:17 java.net.ConnectException: Connection refused 11:11:17 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:17 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:17 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:17 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:17 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:17 at 
org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:17 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:17 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:17 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:17 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:17 11:11:17.722 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:17 11:11:17.722 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:17 11:11:17.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.805 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.806 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:17 11:11:17.806 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:17 "serviceName" : "Testnotificationser1", 11:11:17 "serviceVersion" : "1.0", 11:11:17 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:17 "serviceDescription" : "TestNotificationVF1", 11:11:17 "resources" : [{ 11:11:17 "resourceInstanceName" : "testnotificationvf11", 11:11:17 "resourceName" : "TestNotificationVF1", 11:11:17 "resourceVersion" : "1.0", 11:11:17 "resoucreType" : "VF", 11:11:17 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:17 "artifacts" : [{ 11:11:17 "artifactName" : "sample-xml-alldata-1-1.xml", 11:11:17 "artifactType" : "YANG_XML", 11:11:17 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:17 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:17 "artifactDescription" : "MyYang", 11:11:17 "artifactTimeout" : 0, 11:11:17 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:17 "artifactVersion" : "1", 11:11:17 "relatedArtifacts" : [ 11:11:17 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 11:11:17 ] }, { 11:11:17 "artifactName" : "heat.yaml", 11:11:17 "artifactType" : "HEAT", 11:11:17 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:17 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:17 "artifactDescription" : "heat", 11:11:17 "artifactTimeout" : 60, 11:11:17 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:17 "artifactVersion" : "1", 11:11:17 
"relatedArtifacts" : [ 11:11:17 "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:17 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 11:11:17 ] }, { 11:11:17 "artifactName" : "heat.env", 11:11:17 "artifactType" : "HEAT_ENV", 11:11:17 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:17 "artifactTimeout" : 0, 11:11:17 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "artifactVersion" : "1", 11:11:17 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:17 } 11:11:17 ] 11:11:17 } 11:11:17 ]} 11:11:17 11:11:17.811 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 11:11:17 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:17 "serviceName": "Testnotificationser1", 11:11:17 "serviceVersion": "1.0", 11:11:17 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:17 "serviceDescription": "TestNotificationVF1", 11:11:17 "resources": [ 11:11:17 { 11:11:17 "resourceInstanceName": "testnotificationvf11", 11:11:17 "resourceName": "TestNotificationVF1", 11:11:17 "resourceVersion": "1.0", 11:11:17 "resoucreType": "VF", 11:11:17 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:17 "artifacts": [ 11:11:17 { 11:11:17 "artifactName": "sample-xml-alldata-1-1.xml", 11:11:17 "artifactType": "YANG_XML", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:17 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:17 "artifactDescription": "MyYang", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:17 "relatedArtifacts": [ 11:11:17 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 11:11:17 ], 11:11:17 "relatedArtifactsInfo": [ 11:11:17 { 11:11:17 "artifactName": "heat.env", 11:11:17 "artifactType": "HEAT_ENV", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:17 } 11:11:17 ] 11:11:17 }, 11:11:17 { 11:11:17 "artifactName": "heat.yaml", 11:11:17 "artifactType": "HEAT", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:17 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:17 "artifactDescription": "heat", 11:11:17 "artifactTimeout": 60, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:17 "generatedArtifact": { 11:11:17 "artifactName": "heat.env", 11:11:17 "artifactType": "HEAT_ENV", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription": "Auto-generated HEAT 
Environment deployment artifact", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:17 }, 11:11:17 "relatedArtifacts": [ 11:11:17 "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:17 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 11:11:17 ], 11:11:17 "relatedArtifactsInfo": [ 11:11:17 { 11:11:17 "artifactName": "sample-xml-alldata-1-1.xml", 11:11:17 "artifactType": "YANG_XML", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:17 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:17 "artifactDescription": "MyYang", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:17 "relatedArtifacts": [ 11:11:17 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 11:11:17 ], 11:11:17 "relatedArtifactsInfo": [ 11:11:17 { 11:11:17 "artifactName": "heat.env", 11:11:17 "artifactType": "HEAT_ENV", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:17 } 11:11:17 ] 11:11:17 }, 11:11:17 { 11:11:17 "artifactName": "heat.env", 11:11:17 "artifactType": "HEAT_ENV", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:17 } 11:11:17 ] 11:11:17 }, 11:11:17 { 11:11:17 "artifactName": "heat.env", 11:11:17 "artifactType": "HEAT_ENV", 11:11:17 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:17 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:17 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:17 "artifactTimeout": 0, 11:11:17 "artifactVersion": "1", 11:11:17 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:17 "relatedArtifactsInfo": [] 11:11:17 } 11:11:17 ] 11:11:17 } 11:11:17 ], 11:11:17 "serviceArtifacts": [] 11:11:17 } 11:11:17 11:11:17.823 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.874 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.894 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.894 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:17 11:11:17.904 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:17 11:11:17.924 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.975 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:17 11:11:17.994 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:17 11:11:17.995 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.005 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.025 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.075 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.095 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.095 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.104 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.126 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 
11:11:18.176 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.195 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.195 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.205 [pool-10-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.226 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.277 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.295 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.296 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.304 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.327 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.378 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.396 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:18 11:11:18.396 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:18 11:11:18.396 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection 
to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:18 11:11:18.397 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:18 11:11:18.397 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:18 11:11:18.398 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:18 java.net.ConnectException: Connection refused 11:11:18 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:18 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:18 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:18 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:18 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:18 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:18 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:18 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:18 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:18 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:18 11:11:18.398 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:18 11:11:18.398 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
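Annotation: in the notification a few entries above, the raw broker payload lists related artifacts only as UUID strings ("relatedArtifacts": ["ce65d31c-..."]) while the object handed to the client has them expanded into full artifact entries ("relatedArtifactsInfo", "generatedArtifact"). The snippet below is not the client's actual implementation; it is a Gson-based sketch of the expansion that the pair of log entries implies, assuming each resource instance's artifacts are cross-referenced by artifactUUID.

    import com.google.gson.JsonArray;
    import com.google.gson.JsonElement;
    import com.google.gson.JsonObject;
    import com.google.gson.JsonParser;
    import java.util.HashMap;
    import java.util.Map;

    public final class RelatedArtifactsExpander {
        // rawNotification is the "received notification from broker" JSON shown above.
        public static JsonObject expand(String rawNotification) {
            JsonObject root = JsonParser.parseString(rawNotification).getAsJsonObject();
            for (JsonElement res : root.getAsJsonArray("resources")) {
                JsonArray artifacts = res.getAsJsonObject().getAsJsonArray("artifacts");
                // Index a pristine copy of every artifact of this resource instance by its UUID.
                Map<String, JsonObject> byUuid = new HashMap<>();
                for (JsonElement a : artifacts) {
                    JsonObject art = a.getAsJsonObject();
                    byUuid.put(art.get("artifactUUID").getAsString(), art.deepCopy());
                }
                // Resolve each "relatedArtifacts" UUID list into the referenced artifact objects,
                // mirroring the "relatedArtifactsInfo" field in the "sending notification to client" entry.
                for (JsonElement a : artifacts) {
                    JsonObject art = a.getAsJsonObject();
                    if (!art.has("relatedArtifacts")) {
                        continue;
                    }
                    JsonArray resolved = new JsonArray();
                    for (JsonElement uuid : art.getAsJsonArray("relatedArtifacts")) {
                        JsonObject target = byUuid.get(uuid.getAsString());
                        if (target != null) {
                            resolved.add(target);
                        }
                    }
                    art.add("relatedArtifactsInfo", resolved);
                }
            }
            return root;
        }
    }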
11:11:18 11:11:18.399 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.405 [pool-10-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.429 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.480 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.499 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.499 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.504 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.530 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.581 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.599 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.599 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.604 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.612 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:18 11:11:18.612 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:18 11:11:18.615 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.632 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.682 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.700 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.700 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.714 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.733 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.784 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.801 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.801 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.815 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.815 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:18 11:11:18.815 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:18 "serviceName" : "Testnotificationser1", 11:11:18 "serviceVersion" : "1.0", 11:11:18 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:18 "serviceDescription" : "TestNotificationVF1", 11:11:18 "resources" : [{ 11:11:18 "resourceInstanceName" : "testnotificationvf11", 11:11:18 "resourceName" : "TestNotificationVF1", 11:11:18 "resourceVersion" : "1.0", 11:11:18 "resoucreType" : "VF", 11:11:18 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:18 "artifacts" : [{ 11:11:18 "artifactName" : "sample-xml-alldata-1-1.xml", 11:11:18 "artifactType" : "YANG_XML", 11:11:18 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:18 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:18 
"artifactDescription" : "MyYang", 11:11:18 "artifactTimeout" : 0, 11:11:18 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:18 "artifactVersion" : "1" 11:11:18 }, { 11:11:18 "artifactName" : "heat.yaml", 11:11:18 "artifactType" : "HEAT", 11:11:18 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:18 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:18 "artifactDescription" : "heat", 11:11:18 "artifactTimeout" : 60, 11:11:18 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:18 "artifactVersion" : "1" 11:11:18 }, { 11:11:18 "artifactName" : "heat.env", 11:11:18 "artifactType" : "HEAT_ENV", 11:11:18 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:18 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:18 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:18 "artifactTimeout" : 0, 11:11:18 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:18 "artifactVersion" : "1", 11:11:18 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:18 } 11:11:18 ] 11:11:18 } 11:11:18 ]} 11:11:18 11:11:18.819 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 11:11:18 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:18 "serviceName": "Testnotificationser1", 11:11:18 "serviceVersion": "1.0", 11:11:18 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:18 "serviceDescription": "TestNotificationVF1", 11:11:18 "resources": [ 11:11:18 { 11:11:18 "resourceInstanceName": "testnotificationvf11", 11:11:18 "resourceName": "TestNotificationVF1", 11:11:18 "resourceVersion": "1.0", 11:11:18 "resoucreType": "VF", 11:11:18 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:18 "artifacts": [ 11:11:18 { 11:11:18 "artifactName": "heat.yaml", 11:11:18 "artifactType": "HEAT", 11:11:18 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:18 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:18 "artifactDescription": "heat", 11:11:18 "artifactTimeout": 60, 11:11:18 "artifactVersion": "1", 11:11:18 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:18 "generatedArtifact": { 11:11:18 "artifactName": "heat.env", 11:11:18 "artifactType": "HEAT_ENV", 11:11:18 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:18 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:18 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:18 "artifactTimeout": 0, 11:11:18 "artifactVersion": "1", 11:11:18 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:18 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:18 }, 11:11:18 "relatedArtifactsInfo": [] 11:11:18 } 11:11:18 ] 11:11:18 } 11:11:18 ], 11:11:18 "serviceArtifacts": [] 11:11:18 } 11:11:18 11:11:18.837 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.888 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:18 11:11:18.901 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:18 11:11:18.902 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:18 11:11:18.914 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:18 11:11:18.938 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:18 11:11:18.939 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:18 11:11:18.939 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:18 11:11:18.939 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:18 11:11:18.939 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:18 11:11:18.940 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:18 java.net.ConnectException: Connection refused 11:11:18 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:18 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:18 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:18 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:18 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:18 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:18 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:18 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:18 at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:18 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:18 11:11:18.940 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:18 11:11:18.941 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:19 11:11:19.002 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.002 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.014 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.040 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.091 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.103 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.103 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.114 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.141 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.192 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.203 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no 
node is available 11:11:19 11:11:19.204 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.215 [pool-11-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.242 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.293 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.304 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:19 11:11:19.304 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:19 11:11:19.304 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:19 11:11:19.305 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:19 11:11:19.305 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:19 11:11:19.306 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:19 java.net.ConnectException: Connection refused 11:11:19 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:19 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:19 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:19 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:19 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:19 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:19 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:19 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:19 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:19 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:19 11:11:19.306 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:19 11:11:19.307 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:19 11:11:19.307 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.314 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.344 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.394 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.408 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.414 [pool-11-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.445 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.495 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.514 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.545 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.596 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.609 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.609 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.614 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.621 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:19 11:11:19.622 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:19 11:11:19.624 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.646 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.697 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.709 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.710 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.723 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.748 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.798 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.810 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.823 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.824 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:19 11:11:19.824 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 11:11:19 11:11:19.824 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 11:11:19 11:11:19.824 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 11:11:19 java.lang.NullPointerException: null 11:11:19 at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) 11:11:19 at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) 11:11:19 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:58) 11:11:19 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 11:11:19 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 11:11:19 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 11:11:19 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 11:11:19 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 11:11:19 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:19 11:11:19.849 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.900 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:19 11:11:19.910 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:19 11:11:19.911 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:19 11:11:19.923 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:19 11:11:19.951 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.001 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.011 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.011 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.024 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.052 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:20 11:11:20.052 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:20 11:11:20.052 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:20 11:11:20.053 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:20 11:11:20.053 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:20 11:11:20.054 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:20 java.net.ConnectException: Connection refused 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:20 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:20 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:20 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:20 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:20 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:20 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:20 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:20 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:20 11:11:20.054 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:20 11:11:20.054 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:20 11:11:20.111 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.112 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.123 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.155 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.206 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.212 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.212 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, 
groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.223 [pool-12-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.257 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.307 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.313 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.323 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.358 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.410 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.413 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:20 11:11:20.413 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:20 11:11:20.413 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:20 11:11:20.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:20 11:11:20.414 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:20 11:11:20.415 [kafka-coordinator-heartbeat-thread | mso-group] 
DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:20 java.net.ConnectException: Connection refused 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:20 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:20 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:20 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:20 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:20 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:20 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:20 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:20 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:20 11:11:20.415 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:20 11:11:20.415 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
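The ERROR logged at 11:11:19.824 above is a NullPointerException from NotificationCallbackBuilder.buildResourceInstancesLogic: the second test notification carries only "serviceArtifacts" and no "resources" array, so the resource list is null when the callback payload is assembled. Below is a minimal, defensive sketch of that parsing step, written with plain Gson and hypothetical class and field names for illustration; it is not the actual sdc-distribution-client code:

    // Minimal sketch, assuming a Gson mapping of the notification payload shown
    // in the log. Class and field names below are stand-ins, not the client's API.
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import com.google.gson.Gson;

    public class ResourceInstanceSketch {

        static class Artifact { String artifactName; String artifactType; String artifactURL; }
        static class Resource { String resourceInstanceName; List<Artifact> artifacts; }
        static class Notification { String distributionID; String serviceName; List<Resource> resources; }

        static List<Artifact> collectArtifacts(Notification notification) {
            List<Artifact> result = new ArrayList<>();
            // Guard against payloads with no "resources" element, such as the
            // DG_XML notification above, instead of dereferencing a null list.
            List<Resource> resources =
                notification.resources != null ? notification.resources : Collections.<Resource>emptyList();
            for (Resource resource : resources) {
                if (resource.artifacts != null) {
                    result.addAll(resource.artifacts);
                }
            }
            return result;
        }

        public static void main(String[] args) {
            // Shape of the payload that triggered the error: serviceArtifacts only, no resources.
            String json = "{\"distributionID\":\"5v1234d8\",\"serviceName\":\"srv1\",\"serviceArtifacts\":[]}";
            Notification notification = new Gson().fromJson(json, Notification.class);
            System.out.println("artifacts found: " + collectArtifacts(notification).size());
        }
    }

Guarding the missing "resources" element turns that payload into an empty artifact list instead of the NullPointerException recorded at NotificationCallbackBuilder.java:62 above.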
11:11:20 11:11:20.415 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.424 [pool-12-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.461 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.512 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.515 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.516 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.523 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.563 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.614 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.616 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.616 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.623 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.629 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:20 11:11:20.629 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:20 11:11:20.633 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.664 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.715 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.716 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.717 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.731 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.765 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.816 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.831 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.832 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:20 11:11:20.832 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:20 "serviceName" : "Testnotificationser1", 11:11:20 "serviceVersion" : "1.0", 11:11:20 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:20 "serviceDescription" : "TestNotificationVF1", 11:11:20 "resources" : [{ 11:11:20 "resourceInstanceName" : "testnotificationvf11", 11:11:20 "resourceName" : "TestNotificationVF1", 11:11:20 "resourceVersion" : "1.0", 11:11:20 "resoucreType" : "VF", 11:11:20 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:20 "artifacts" : [{ 11:11:20 "artifactName" : "sample-xml-alldata-1-1.xml", 11:11:20 "artifactType" : "YANG_XML", 11:11:20 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:20 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:20 
"artifactDescription" : "MyYang", 11:11:20 "artifactTimeout" : 0, 11:11:20 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:20 "artifactVersion" : "1" 11:11:20 }, { 11:11:20 "artifactName" : "heat.yaml", 11:11:20 "artifactType" : "HEAT", 11:11:20 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:20 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:20 "artifactDescription" : "heat", 11:11:20 "artifactTimeout" : 60, 11:11:20 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:20 "artifactVersion" : "1" 11:11:20 }, { 11:11:20 "artifactName" : "heat.env", 11:11:20 "artifactType" : "HEAT_ENV", 11:11:20 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:20 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:20 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:20 "artifactTimeout" : 0, 11:11:20 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:20 "artifactVersion" : "1", 11:11:20 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:20 } 11:11:20 ] 11:11:20 } 11:11:20 ]} 11:11:20 11:11:20.836 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 11:11:20 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:20 "serviceName": "Testnotificationser1", 11:11:20 "serviceVersion": "1.0", 11:11:20 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:20 "serviceDescription": "TestNotificationVF1", 11:11:20 "resources": [ 11:11:20 { 11:11:20 "resourceInstanceName": "testnotificationvf11", 11:11:20 "resourceName": "TestNotificationVF1", 11:11:20 "resourceVersion": "1.0", 11:11:20 "resoucreType": "VF", 11:11:20 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:20 "artifacts": [ 11:11:20 { 11:11:20 "artifactName": "heat.yaml", 11:11:20 "artifactType": "HEAT", 11:11:20 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:20 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:20 "artifactDescription": "heat", 11:11:20 "artifactTimeout": 60, 11:11:20 "artifactVersion": "1", 11:11:20 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:20 "generatedArtifact": { 11:11:20 "artifactName": "heat.env", 11:11:20 "artifactType": "HEAT_ENV", 11:11:20 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:20 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:20 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:20 "artifactTimeout": 0, 11:11:20 "artifactVersion": "1", 11:11:20 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:20 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:20 }, 11:11:20 "relatedArtifactsInfo": [] 11:11:20 } 11:11:20 ] 11:11:20 } 11:11:20 ], 11:11:20 "serviceArtifacts": [] 11:11:20 } 11:11:20 11:11:20.866 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.917 [kafka-producer-network-thread | 
mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:20 11:11:20.917 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:20 11:11:20.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:20 11:11:20.931 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:20 11:11:20.967 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:20 11:11:20.967 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:20 11:11:20.967 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:20 11:11:20.968 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:20 11:11:20.968 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:20 11:11:20.969 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:20 java.net.ConnectException: Connection refused 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:20 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:20 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:20 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:20 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:20 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:20 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:20 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:20 at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:20 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:20 11:11:20.969 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:20 11:11:20.969 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:21 11:11:21.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.031 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.069 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.118 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.119 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.119 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.131 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.170 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send 
FindCoordinator request 11:11:21 11:11:21.220 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.232 [pool-13-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.271 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.320 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.321 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.331 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.372 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:21 11:11:21.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:21 11:11:21.421 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:21 11:11:21.421 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:21 11:11:21.421 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:21 11:11:21.422 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - 
[Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:21 java.net.ConnectException: Connection refused 11:11:21 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:21 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:21 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:21 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:21 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:21 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:21 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:21 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:21 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:21 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:21 11:11:21.422 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:21 11:11:21.423 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 11:11:21 11:11:21.423 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.423 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.432 [pool-13-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.473 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.523 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.523 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.524 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.531 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.574 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.624 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.624 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.625 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.631 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.637 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 11:11:21 11:11:21.638 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:21 11:11:21.641 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.676 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.724 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.725 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.734 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.739 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.784 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.825 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.825 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.835 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.840 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.840 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 11:11:21 11:11:21.840 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { 11:11:21 "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:21 "serviceName" : "Testnotificationser1", 11:11:21 "serviceVersion" : "1.0", 11:11:21 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:21 "serviceDescription" : "TestNotificationVF1", 11:11:21 "serviceArtifacts" : [{ 11:11:21 "artifactName" : "sample-xml-alldata-1-1.xml", 11:11:21 "artifactType" : "YANG_XML", 11:11:21 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:21 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:21 "artifactDescription" : "MyYang", 11:11:21 "artifactTimeout" : 0, 11:11:21 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:21 "artifactVersion" : "1" 11:11:21 }, { 11:11:21 "artifactName" : "heat.yaml", 11:11:21 "artifactType" : "HEAT", 11:11:21 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:21 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:21 "artifactDescription" : "heat", 11:11:21 "artifactTimeout" : 60, 11:11:21 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:21 "artifactVersion" : "1" 11:11:21 }, { 11:11:21 "artifactName" : "heat.env", 11:11:21 "artifactType" : "HEAT_ENV", 11:11:21 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:21 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:21 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:21 "artifactTimeout" : 0, 11:11:21 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:21 "artifactVersion" : "1", 11:11:21 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:21 } 11:11:21 ], 11:11:21 "resources" : [{ 11:11:21 "resourceInstanceName" : "testnotificationvf11", 11:11:21 "resourceName" : "TestNotificationVF1", 11:11:21 "resourceVersion" : "1.0", 11:11:21 "resoucreType" : "VF", 11:11:21 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:21 "artifacts" : [{ 11:11:21 "artifactName" : "sample-xml-alldata-1-1.xml", 11:11:21 "artifactType" : "YANG_XML", 11:11:21 "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 11:11:21 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 11:11:21 "artifactDescription" : "MyYang", 11:11:21 "artifactTimeout" : 0, 11:11:21 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 11:11:21 "artifactVersion" : "1" 11:11:21 }, { 11:11:21 "artifactName" : "heat.yaml", 11:11:21 "artifactType" : "HEAT", 11:11:21 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:21 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:21 "artifactDescription" : "heat", 11:11:21 "artifactTimeout" : 60, 11:11:21 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:21 "artifactVersion" : "1" 11:11:21 }, { 11:11:21 "artifactName" : "heat.env", 11:11:21 "artifactType" : "HEAT_ENV", 11:11:21 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:21 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:21 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 11:11:21 "artifactTimeout" : 0, 11:11:21 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:21 "artifactVersion" : "1", 11:11:21 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:21 } 11:11:21 ] 11:11:21 } 11:11:21 ] 11:11:21 } 11:11:21 11:11:21.846 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 11:11:21 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 11:11:21 "serviceName": "Testnotificationser1", 11:11:21 "serviceVersion": "1.0", 11:11:21 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 11:11:21 "serviceDescription": "TestNotificationVF1", 11:11:21 "resources": [ 11:11:21 { 11:11:21 "resourceInstanceName": "testnotificationvf11", 11:11:21 "resourceName": "TestNotificationVF1", 11:11:21 "resourceVersion": "1.0", 11:11:21 "resoucreType": "VF", 11:11:21 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 11:11:21 "artifacts": [ 11:11:21 { 11:11:21 "artifactName": "heat.yaml", 11:11:21 "artifactType": "HEAT", 11:11:21 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:21 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:21 "artifactDescription": "heat", 11:11:21 "artifactTimeout": 60, 11:11:21 "artifactVersion": "1", 11:11:21 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:21 "generatedArtifact": { 11:11:21 "artifactName": "heat.env", 11:11:21 "artifactType": "HEAT_ENV", 11:11:21 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:21 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:21 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:21 "artifactTimeout": 0, 11:11:21 "artifactVersion": "1", 11:11:21 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:21 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:21 }, 11:11:21 "relatedArtifactsInfo": [] 11:11:21 } 11:11:21 ] 11:11:21 } 11:11:21 ], 11:11:21 "serviceArtifacts": [ 11:11:21 { 11:11:21 "artifactName": "heat.yaml", 11:11:21 "artifactType": "HEAT", 11:11:21 "artifactURL": 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 11:11:21 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 11:11:21 "artifactDescription": "heat", 11:11:21 "artifactTimeout": 60, 11:11:21 "artifactVersion": "1", 11:11:21 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 11:11:21 "generatedArtifact": { 11:11:21 "artifactName": "heat.env", 11:11:21 "artifactType": "HEAT_ENV", 11:11:21 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 11:11:21 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 11:11:21 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 11:11:21 "artifactTimeout": 0, 11:11:21 "artifactVersion": "1", 11:11:21 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 11:11:21 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 11:11:21 } 11:11:21 } 11:11:21 ] 11:11:21 } 11:11:21 11:11:21.885 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:21 11:11:21.926 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:21 11:11:21.936 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:21 11:11:21.939 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:21 11:11:21.986 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.026 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.027 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.037 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.040 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - 
Polling for messages from topic: null 11:11:22 11:11:22.088 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:22 11:11:22.088 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:22 11:11:22.088 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:22 11:11:22.089 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:22 11:11:22.089 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:22 11:11:22.090 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:22 java.net.ConnectException: Connection refused 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:22 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:22 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:22 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:22 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:22 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:22 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:22 11:11:22.090 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Node 1 disconnected. 11:11:22 11:11:22.090 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
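The distribution notification logged a few entries above ("received notification from broker" / "sending notification to client") is a plain JSON document describing the service, its resources and their artifacts. Below is a minimal sketch, assuming Gson is on the classpath, of how such a payload could be deserialized for inspection; the class and field names are illustrative only and mirror the JSON keys shown above (including the upstream "resoucreType" spelling), they are not the distribution client's actual model classes.

import com.google.gson.Gson;
import java.util.List;

// Illustrative POJOs mirroring the notification JSON logged above.
// These are NOT the sdc-distribution-client's real model classes.
class ArtifactInfo {
    String artifactName;
    String artifactType;
    String artifactURL;
    String artifactChecksum;
    String artifactDescription;
    int artifactTimeout;
    String artifactUUID;
    String artifactVersion;
}

class ResourceInfo {
    String resourceInstanceName;
    String resourceName;
    String resourceVersion;
    String resoucreType; // spelling follows the JSON key emitted in the log
    String resourceUUID;
    List<ArtifactInfo> artifacts;
}

class NotificationData {
    String distributionID;
    String serviceName;
    String serviceVersion;
    String serviceUUID;
    String serviceDescription;
    List<ArtifactInfo> serviceArtifacts;
    List<ResourceInfo> resources;
}

public class NotificationParseSketch {
    public static void main(String[] args) {
        // Tiny stand-in payload; in practice this would be the full JSON logged above.
        String json = "{\"serviceName\":\"Testnotificationser1\",\"serviceArtifacts\":[]}";
        NotificationData data = new Gson().fromJson(json, NotificationData.class);
        System.out.println(data.serviceName + ", artifacts: " + data.serviceArtifacts.size());
    }
}

Gson maps the fields purely by key name, which is why the misspelled "resoucreType" key has to be kept verbatim in any such model.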
11:11:22 11:11:22.127 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.127 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.139 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 11:11:22.191 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.228 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.228 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.240 [pool-14-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 11:11:22.241 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.292 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.329 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.340 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 11:11:22.342 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.392 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is 
available 11:11:22 11:11:22.429 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.430 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.440 [pool-14-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 11:11:22.443 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.494 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.530 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.530 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.540 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 11:11:22.545 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.595 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initialize connection to node localhost:39115 (id: 1 rack: null) for sending metadata request 11:11:22 11:11:22.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:22 11:11:22.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Initiating connection to node localhost:39115 (id: 1 rack: null) using address localhost/127.0.0.1 11:11:22 11:11:22.631 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:22 11:11:22.632 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:22 11:11:22.632 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 11:11:22 java.net.ConnectException: Connection refused 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:22 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:22 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:22 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:22 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:22 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:22 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 11:11:22 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 11:11:22 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 11:11:22 11:11:22.633 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Node 1 disconnected. 11:11:22 11:11:22.633 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:39115) could not be established. Broker may not be available. 
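The consumer side of this retry loop is a KafkaConsumer in group mso-group configured for SASL_PLAINTEXT with the PLAIN mechanism, repeatedly failing to reach localhost:39115 (Connection refused) while the NotificationConsumer keeps polling. The following is a minimal sketch, under assumed values, of a consumer with equivalent security settings and a single poll; the broker address, credentials and topic name are placeholders, not values taken from this build.

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class SaslConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address and group id.
        props.put("bootstrap.servers", "localhost:39115");
        props.put("group.id", "mso-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism, matching the security settings in the log.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
              + "username=\"user\" password=\"secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("SDC-DISTR-NOTIF-TOPIC")); // topic name is illustrative
            // Each poll drives the connect/backoff cycle visible in the log; with no broker
            // listening on the port, the client reports "Connection refused" and keeps retrying
            // until the consumer is closed.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}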
11:11:22 11:11:22.633 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.640 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 11:11:22 [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.731 s - in org.onap.sdc.impl.NotificationConsumerTest 11:11:22 11:11:22.645 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 [INFO] Running org.onap.sdc.impl.HeatParserTest 11:11:22 11:11:22.650 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 11:11:22 11:11:22.696 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.733 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.734 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.746 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.754 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. 
11:11:22 org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null 11:11:22 in 'string', line 1, column 1: 11:11:22 just text 11:11:22 ^ 11:11:22 11:11:22 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) 11:11:22 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) 11:11:22 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) 11:11:22 at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) 11:11:22 at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) 11:11:22 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) 11:11:22 at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) 11:11:22 at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) 11:11:22 at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) 11:11:22 at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) 11:11:22 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:22 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:22 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:22 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:22 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:22 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:22 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:22 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:22 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:22 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:22 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:22 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:22 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:22 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:22 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:22 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:22 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:22 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:22 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:22 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:22 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:22 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:22 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:22 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:22 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:22 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:22 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:22 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:22 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:22 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:22 at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:22 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:22 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:22 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:22 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:22 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:22 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:22 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:22 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:22 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:22 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:22 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:22 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:22 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:22 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:22 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:22 Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null 11:11:22 at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) 11:11:22 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) 11:11:22 ... 76 common frames omitted 11:11:22 11:11:22.755 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template. 11:11:22 11:11:22.755 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 11:11:22 11:11:22.783 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 11:11:22 11:11:22 description: Simple template to deploy a stack with two virtual machine instances 11:11:22 11:11:22 parameters: 11:11:22 image_name_1: 11:11:22 type: string 11:11:22 label: Image Name 11:11:22 description: SCOIMAGE Specify an image name for instance1 11:11:22 default: cirros-0.3.1-x86_64 11:11:22 image_name_2: 11:11:22 type: string 11:11:22 label: Image Name 11:11:22 description: SCOIMAGE Specify an image name for instance2 11:11:22 default: cirros-0.3.1-x86_64 11:11:22 network_id: 11:11:22 type: string 11:11:22 label: Network ID 11:11:22 description: SCONETWORK Network to be used for the compute instance 11:11:22 hidden: true 11:11:22 constraints: 11:11:22 - length: { min: 6, max: 8 } 11:11:22 description: Password length must be between 6 and 8 characters. 
11:11:22 - range: { min: 6, max: 8 } 11:11:22 description: Range description 11:11:22 - allowed_values: 11:11:22 - m1.small 11:11:22 - m1.medium 11:11:22 - m1.large 11:11:22 description: Allowed values description 11:11:22 - allowed_pattern: "[a-zA-Z0-9]+" 11:11:22 description: Password must consist of characters and numbers only. 11:11:22 - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" 11:11:22 description: Password must start with an uppercase character. 11:11:22 - custom_constraint: nova.keypair 11:11:22 description: Custom description 11:11:22 11:11:22 resources: 11:11:22 my_instance1: 11:11:22 type: OS::Nova::Server 11:11:22 properties: 11:11:22 image: { get_param: image_name_1 } 11:11:22 flavor: m1.small 11:11:22 networks: 11:11:22 - network : { get_param : network_id } 11:11:22 my_instance2: 11:11:22 type: OS::Nova::Server 11:11:22 properties: 11:11:22 image: { get_param: image_name_2 } 11:11:22 flavor: m1.tiny 11:11:22 networks: 11:11:22 - network : { get_param : network_id } 11:11:22 11:11:22.797 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 11:11:22 11:11:22.832 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 11:11:22 11:11:22.834 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.835 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.835 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 11:11:22 11:11:22 description: Simple template to deploy a stack with two virtual machine instances 11:11:22 11:11:22.836 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 11:11:22 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 s - in org.onap.sdc.impl.HeatParserTest 11:11:22 [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest 11:11:22 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest 11:11:22 [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest 11:11:22 11:11:22.847 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest 11:11:22 [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest 11:11:22 [WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.004 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest 11:11:22 [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest 11:11:22 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.impl.ConfigurationValidatorTest 11:11:22 [INFO] Running org.onap.sdc.impl.DistributionClientTest 11:11:22 11:11:22.873 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.875 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 11:11:22 11:11:22.875 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 11:11:22 11:11:22.875 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@19bd7cf4 11:11:22 11:11:22.877 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:22 acks = -1 11:11:22 batch.size = 16384 11:11:22 bootstrap.servers = [localhost:9092] 11:11:22 buffer.memory = 33554432 11:11:22 client.dns.lookup = use_all_dns_ips 11:11:22 client.id = mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06 11:11:22 compression.type = none 11:11:22 connections.max.idle.ms = 540000 11:11:22 delivery.timeout.ms = 120000 11:11:22 enable.idempotence = true 11:11:22 interceptor.classes = [] 11:11:22 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:22 linger.ms = 0 11:11:22 max.block.ms = 60000 11:11:22 max.in.flight.requests.per.connection = 5 11:11:22 max.request.size = 1048576 11:11:22 metadata.max.age.ms = 300000 11:11:22 metadata.max.idle.ms = 300000 11:11:22 metric.reporters = [] 11:11:22 metrics.num.samples = 2 11:11:22 metrics.recording.level = INFO 11:11:22 metrics.sample.window.ms = 30000 
11:11:22 partitioner.adaptive.partitioning.enable = true 11:11:22 partitioner.availability.timeout.ms = 0 11:11:22 partitioner.class = null 11:11:22 partitioner.ignore.keys = false 11:11:22 receive.buffer.bytes = 32768 11:11:22 reconnect.backoff.max.ms = 1000 11:11:22 reconnect.backoff.ms = 50 11:11:22 request.timeout.ms = 30000 11:11:22 retries = 2147483647 11:11:22 retry.backoff.ms = 100 11:11:22 sasl.client.callback.handler.class = null 11:11:22 sasl.jaas.config = [hidden] 11:11:22 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:22 sasl.kerberos.min.time.before.relogin = 60000 11:11:22 sasl.kerberos.service.name = null 11:11:22 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:22 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:22 sasl.login.callback.handler.class = null 11:11:22 sasl.login.class = null 11:11:22 sasl.login.connect.timeout.ms = null 11:11:22 sasl.login.read.timeout.ms = null 11:11:22 sasl.login.refresh.buffer.seconds = 300 11:11:22 sasl.login.refresh.min.period.seconds = 60 11:11:22 sasl.login.refresh.window.factor = 0.8 11:11:22 sasl.login.refresh.window.jitter = 0.05 11:11:22 sasl.login.retry.backoff.max.ms = 10000 11:11:22 sasl.login.retry.backoff.ms = 100 11:11:22 sasl.mechanism = PLAIN 11:11:22 sasl.oauthbearer.clock.skew.seconds = 30 11:11:22 sasl.oauthbearer.expected.audience = null 11:11:22 sasl.oauthbearer.expected.issuer = null 11:11:22 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:22 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:22 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:22 sasl.oauthbearer.jwks.endpoint.url = null 11:11:22 sasl.oauthbearer.scope.claim.name = scope 11:11:22 sasl.oauthbearer.sub.claim.name = sub 11:11:22 sasl.oauthbearer.token.endpoint.url = null 11:11:22 security.protocol = SASL_PLAINTEXT 11:11:22 security.providers = null 11:11:22 send.buffer.bytes = 131072 11:11:22 socket.connection.setup.timeout.max.ms = 30000 11:11:22 socket.connection.setup.timeout.ms = 10000 11:11:22 ssl.cipher.suites = null 11:11:22 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:22 ssl.endpoint.identification.algorithm = https 11:11:22 ssl.engine.factory.class = null 11:11:22 ssl.key.password = null 11:11:22 ssl.keymanager.algorithm = SunX509 11:11:22 ssl.keystore.certificate.chain = null 11:11:22 ssl.keystore.key = null 11:11:22 ssl.keystore.location = null 11:11:22 ssl.keystore.password = null 11:11:22 ssl.keystore.type = JKS 11:11:22 ssl.protocol = TLSv1.3 11:11:22 ssl.provider = null 11:11:22 ssl.secure.random.implementation = null 11:11:22 ssl.trustmanager.algorithm = PKIX 11:11:22 ssl.truststore.certificates = null 11:11:22 ssl.truststore.location = null 11:11:22 ssl.truststore.password = null 11:11:22 ssl.truststore.type = JKS 11:11:22 transaction.timeout.ms = 60000 11:11:22 transactional.id = null 11:11:22 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:22 11:11:22 11:11:22.879 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Instantiated an idempotent producer. 
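The ProducerConfig dump above shows an idempotent producer (enable.idempotence = true, acks = -1, retries = Integer.MAX_VALUE) using SASL_PLAINTEXT with the PLAIN mechanism against localhost:9092. A minimal sketch with equivalent settings follows; the credentials, topic name and payload are placeholders, not the test's actual configuration, and the send is only illustrative since no broker is running in this job.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SaslProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Same bootstrap address as the dumped config; placeholder credentials.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Mirrors the dumped ProducerConfig: idempotence on, acks=all, SASL_PLAINTEXT/PLAIN.
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
              + "username=\"user\" password=\"secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Topic name and value are illustrative; send() is asynchronous, and the broker
            // connection is only attempted once metadata for the topic is needed.
            producer.send(new ProducerRecord<>("SDC-DISTR-STATUS-TOPIC", "status"));
            producer.flush();
        }
    }
}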
11:11:22 11:11:22.881 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:22 11:11:22.881 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:22 11:11:22.881 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216282881 11:11:22 11:11:22.881 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Starting Kafka producer I/O thread. 11:11:22 11:11:22.881 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Transition from state UNINITIALIZED to INITIALIZING 11:11:22 11:11:22.881 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Kafka producer started 11:11:22 11:11:22.881 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:22 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 11:11:22 11:11:22.882 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:22 11:11:22.882 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:22 11:11:22.883 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:22 11:11:22.883 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.883 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 11:11:22 11:11:22.883 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:22 11:11:22.883 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:22 11:11:22.884 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:22 11:11:22.886 [kafka-producer-network-thread | 
mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:22 java.net.ConnectException: Connection refused 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:22 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:22 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:22 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:22 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:22 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:22 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:22 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:22 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:22 11:11:22.886 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Node -1 disconnected. 11:11:22 11:11:22.886 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 11:11:22 11:11:22.887 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:22 11:11:22.887 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.887 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:22 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
11:11:22 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:22 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:22 11:11:22.887 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 11:11:22 11:11:22.888 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.888 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 11:11:22 11:11:22.888 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.888 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 11:11:22 11:11:22.888 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.889 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 11:11:22 11:11:22.889 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.889 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 11:11:22 11:11:22.889 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.890 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 11:11:22 11:11:22.890 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.890 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN] 11:11:22 11:11:22.890 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.891 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 11:11:22 11:11:22.891 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.891 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 11:11:22 11:11:22.892 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.892 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, 
responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 11:11:22 11:11:22.892 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.892 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 11:11:22 11:11:22.893 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:22 11:11:22.893 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:22 isUseHttpsWithSDC set to true 11:11:22 11:11:22.894 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:22 11:11:22.898 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.935 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:22 11:11:22.935 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:22 11:11:22.941 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= d47858c6-600e-452b-adf0-4418875c1c22 url= /sdc/v1/artifactTypes 11:11:22 11:11:22.942 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes 11:11:22 11:11:22.948 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:22 11:11:22.987 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:22 11:11:22.988 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:22 11:11:22.988 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:22 11:11:22.988 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:22 11:11:22.988 [kafka-producer-network-thread | 
mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:22 11:11:22.988 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:22 11:11:22.989 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:22 java.net.ConnectException: Connection refused 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:22 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:22 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:22 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:22 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:22 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:22 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:22 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:22 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:22 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:22 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:22 11:11:22.990 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Node -1 disconnected. 11:11:22 11:11:22.990 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
11:11:22 11:11:22.990 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:22.990 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:22.998 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:23 11:11:23.015 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 11:11:23 java.net.UnknownHostException: badhost: System error 11:11:23 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 11:11:23 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) 11:11:23 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) 11:11:23 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) 11:11:23 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 11:11:23 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 11:11:23 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 11:11:23 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:11:23 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:11:23 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:11:23 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:11:23 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:11:23 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:11:23 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 11:11:23 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:11:23 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:11:23 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:11:23 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:11:23 at org.mockito.Answers.answer(Answers.java:99) 11:11:23 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:11:23 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:11:23 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 11:11:23 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:23 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:23 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:23 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:23 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:23 at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:23 11:11:23.016 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@66e42a03 11:11:23 11:11:23.016 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 11:11:23 11:11:23.016 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 11:11:23 11:11:23.017 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 
11:11:23 11:11:23.036 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:23 11:11:23.036 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:23 11:11:23.043 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 6e95b891-a9b3-48d5-a19e-99b52a42d6e2 url= /sdc/v1/artifactTypes 11:11:23 11:11:23.043 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 11:11:23 11:11:23.047 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 11:11:23 org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 11:11:23 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 11:11:23 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:11:23 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:11:23 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:11:23 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:11:23 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:11:23 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 11:11:23 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:11:23 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:11:23 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:11:23 at 
org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:11:23 at org.mockito.Answers.answer(Answers.java:99) 11:11:23 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:11:23 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:11:23 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 11:11:23 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:23 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:23 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:23 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:23 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:23 Caused by: java.net.ConnectException: Connection refused (Connection refused) 11:11:23 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 11:11:23 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 11:11:23 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 11:11:23 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 11:11:23 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 11:11:23 at java.base/java.net.Socket.connect(Socket.java:609) 11:11:23 at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) 11:11:23 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 11:11:23 ... 
98 common frames omitted 11:11:23 11:11:23.048 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@111e8ebf 11:11:23 11:11:23.048 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 11:11:23 11:11:23.048 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 11:11:23 11:11:23.048 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.048 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.049 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:23 11:11:23.051 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:23 11:11:23.052 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 11:11:23 11:11:23.052 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 11:11:23 11:11:23.052 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@6e41ba2 11:11:23 11:11:23.052 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:23 acks = -1 11:11:23 batch.size = 16384 11:11:23 bootstrap.servers = [localhost:9092] 11:11:23 buffer.memory = 33554432 11:11:23 client.dns.lookup = use_all_dns_ips 11:11:23 client.id = mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108 11:11:23 compression.type = none 11:11:23 connections.max.idle.ms = 540000 11:11:23 delivery.timeout.ms = 120000 11:11:23 enable.idempotence = true 11:11:23 interceptor.classes = [] 11:11:23 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:23 linger.ms = 0 11:11:23 max.block.ms = 60000 11:11:23 max.in.flight.requests.per.connection = 5 11:11:23 max.request.size = 1048576 11:11:23 metadata.max.age.ms = 300000 11:11:23 metadata.max.idle.ms = 300000 11:11:23 metric.reporters = [] 11:11:23 metrics.num.samples = 2 11:11:23 metrics.recording.level = INFO 11:11:23 metrics.sample.window.ms = 30000 11:11:23 partitioner.adaptive.partitioning.enable = true 11:11:23 partitioner.availability.timeout.ms = 0 11:11:23 partitioner.class = null 11:11:23 partitioner.ignore.keys = false 11:11:23 receive.buffer.bytes = 32768 11:11:23 reconnect.backoff.max.ms = 1000 11:11:23 reconnect.backoff.ms = 50 11:11:23 request.timeout.ms = 30000 11:11:23 retries = 2147483647 11:11:23 retry.backoff.ms = 100 11:11:23 sasl.client.callback.handler.class = null 11:11:23 sasl.jaas.config = [hidden] 11:11:23 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:23 sasl.kerberos.min.time.before.relogin = 60000 11:11:23 sasl.kerberos.service.name = null 11:11:23 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:23 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:23 sasl.login.callback.handler.class = null 11:11:23 sasl.login.class = null 11:11:23 sasl.login.connect.timeout.ms = null 11:11:23 sasl.login.read.timeout.ms = null 11:11:23 sasl.login.refresh.buffer.seconds = 300 11:11:23 sasl.login.refresh.min.period.seconds = 60 11:11:23 sasl.login.refresh.window.factor = 0.8 11:11:23 
sasl.login.refresh.window.jitter = 0.05 11:11:23 sasl.login.retry.backoff.max.ms = 10000 11:11:23 sasl.login.retry.backoff.ms = 100 11:11:23 sasl.mechanism = PLAIN 11:11:23 sasl.oauthbearer.clock.skew.seconds = 30 11:11:23 sasl.oauthbearer.expected.audience = null 11:11:23 sasl.oauthbearer.expected.issuer = null 11:11:23 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:23 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:23 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:23 sasl.oauthbearer.jwks.endpoint.url = null 11:11:23 sasl.oauthbearer.scope.claim.name = scope 11:11:23 sasl.oauthbearer.sub.claim.name = sub 11:11:23 sasl.oauthbearer.token.endpoint.url = null 11:11:23 security.protocol = SASL_PLAINTEXT 11:11:23 security.providers = null 11:11:23 send.buffer.bytes = 131072 11:11:23 socket.connection.setup.timeout.max.ms = 30000 11:11:23 socket.connection.setup.timeout.ms = 10000 11:11:23 ssl.cipher.suites = null 11:11:23 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:23 ssl.endpoint.identification.algorithm = https 11:11:23 ssl.engine.factory.class = null 11:11:23 ssl.key.password = null 11:11:23 ssl.keymanager.algorithm = SunX509 11:11:23 ssl.keystore.certificate.chain = null 11:11:23 ssl.keystore.key = null 11:11:23 ssl.keystore.location = null 11:11:23 ssl.keystore.password = null 11:11:23 ssl.keystore.type = JKS 11:11:23 ssl.protocol = TLSv1.3 11:11:23 ssl.provider = null 11:11:23 ssl.secure.random.implementation = null 11:11:23 ssl.trustmanager.algorithm = PKIX 11:11:23 ssl.truststore.certificates = null 11:11:23 ssl.truststore.location = null 11:11:23 ssl.truststore.password = null 11:11:23 ssl.truststore.type = JKS 11:11:23 transaction.timeout.ms = 60000 11:11:23 transactional.id = null 11:11:23 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:23 11:11:23 11:11:23.054 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Instantiated an idempotent producer. 11:11:23 11:11:23.056 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:23 11:11:23.056 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:23 11:11:23.056 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216283056 11:11:23 11:11:23.056 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Kafka producer started 11:11:23 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 11:11:23 11:11:23.056 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Starting Kafka producer I/O thread. 
11:11:23 11:11:23.057 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Transition from state UNINITIALIZED to INITIALIZING 11:11:23 11:11:23.057 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.057 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.057 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:23 11:11:23.058 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:23 11:11:23.058 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:23 11:11:23.058 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:23 11:11:23.058 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:23 11:11:23.060 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 11:11:23 11:11:23.060 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.060 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:23 java.net.ConnectException: Connection refused 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:23 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:23 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:23 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:23 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:23 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 
11:11:23 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.060 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Node -1 disconnected. 11:11:23 11:11:23.060 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 11:11:23 11:11:23.060 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:23.061 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.062 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.062 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.069 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.069 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.070 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 11:11:23 11:11:23.070 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 11:11:23 11:11:23.071 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.071 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.072 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:23 11:11:23.076 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= d5705fc9-705f-45a6-ba05-d1f0d6527aa8 url= /sdc/v1/artifactTypes 11:11:23 11:11:23.076 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 11:11:23 11:11:23.081 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 11:11:23 java.net.UnknownHostException: proxy: System error 11:11:23 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 11:11:23 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) 11:11:23 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) 11:11:23 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) 11:11:23 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 11:11:23 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 11:11:23 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 11:11:23 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:11:23 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:11:23 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:11:23 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:11:23 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:11:23 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 11:11:23 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:11:23 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:11:23 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:11:23 at 
org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:11:23 at org.mockito.Answers.answer(Answers.java:99) 11:11:23 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:11:23 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:11:23 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 11:11:23 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:23 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:23 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:23 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:23 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:23 11:11:23.081 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@57727bd9 11:11:23 11:11:23.081 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 11:11:23 11:11:23.081 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 11:11:23 11:11:23.081 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:23 11:11:23.082 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 7324e6da-0b27-4879-aabf-e82393a5e064 url= /sdc/v1/artifactTypes 11:11:23 11:11:23.082 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 11:11:23 11:11:23.083 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 11:11:23 java.net.UnknownHostException: proxy 11:11:23 at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) 11:11:23 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 11:11:23 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 11:11:23 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 11:11:23 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 11:11:23 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 11:11:23 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:11:23 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:11:23 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:11:23 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:11:23 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:11:23 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:11:23 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 11:11:23 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 11:11:23 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$D6vZKrcQ.invokeWithArguments(Unknown Source) 11:11:23 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 11:11:23 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 11:11:23 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 11:11:23 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 11:11:23 at org.mockito.Answers.answer(Answers.java:99) 11:11:23 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 11:11:23 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 11:11:23 at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 11:11:23 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 11:11:23 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 11:11:23 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 11:11:23 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 11:11:23 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 11:11:23 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 11:11:23 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 11:11:23 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 11:11:23 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 11:11:23 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 11:11:23 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 11:11:23 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 11:11:23 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 11:11:23 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 11:11:23 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 11:11:23 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 11:11:23 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 11:11:23 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 11:11:23 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 11:11:23 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 11:11:23 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 11:11:23 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 11:11:23 11:11:23.083 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5a72b08e 11:11:23 11:11:23.083 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 11:11:23 11:11:23.083 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 11:11:23 11:11:23.084 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.084 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.085 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.085 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.086 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 11:11:23 11:11:23.086 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 11:11:23 11:11:23.086 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 11:11:23 11:11:23.086 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 11:11:23 11:11:23.086 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.086 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 11:11:23.088 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 11:11:23 11:11:23.088 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 11:11:23 11:11:23.088 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 11:11:23 11:11:23.089 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 11:11:23 11:11:23.089 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 11:11:23 11:11:23.089 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@7abe04e0 11:11:23 11:11:23.089 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:23 acks = -1 11:11:23 batch.size = 16384 11:11:23 bootstrap.servers = [localhost:9092] 11:11:23 buffer.memory = 33554432 11:11:23 client.dns.lookup = use_all_dns_ips 11:11:23 client.id = mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d 11:11:23 compression.type = none 11:11:23 connections.max.idle.ms = 540000 11:11:23 delivery.timeout.ms = 120000 11:11:23 enable.idempotence = true 11:11:23 interceptor.classes = [] 11:11:23 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:23 linger.ms = 0 11:11:23 max.block.ms = 60000 11:11:23 max.in.flight.requests.per.connection = 5 11:11:23 max.request.size = 1048576 11:11:23 metadata.max.age.ms = 300000 11:11:23 metadata.max.idle.ms = 300000 11:11:23 metric.reporters = [] 11:11:23 metrics.num.samples = 2 11:11:23 metrics.recording.level = INFO 11:11:23 metrics.sample.window.ms = 30000 11:11:23 partitioner.adaptive.partitioning.enable = true 11:11:23 partitioner.availability.timeout.ms = 0 11:11:23 partitioner.class = null 11:11:23 partitioner.ignore.keys = false 11:11:23 receive.buffer.bytes = 32768 11:11:23 reconnect.backoff.max.ms = 1000 11:11:23 reconnect.backoff.ms = 50 11:11:23 request.timeout.ms = 30000 11:11:23 retries = 2147483647 11:11:23 retry.backoff.ms = 100 11:11:23 sasl.client.callback.handler.class = null 11:11:23 sasl.jaas.config = [hidden] 11:11:23 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:23 sasl.kerberos.min.time.before.relogin = 60000 11:11:23 sasl.kerberos.service.name = null 11:11:23 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:23 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:23 sasl.login.callback.handler.class = null 11:11:23 sasl.login.class = null 11:11:23 sasl.login.connect.timeout.ms = null 11:11:23 sasl.login.read.timeout.ms = null 11:11:23 sasl.login.refresh.buffer.seconds = 300 11:11:23 sasl.login.refresh.min.period.seconds = 60 11:11:23 sasl.login.refresh.window.factor = 0.8 11:11:23 sasl.login.refresh.window.jitter = 0.05 11:11:23 sasl.login.retry.backoff.max.ms = 10000 11:11:23 sasl.login.retry.backoff.ms = 100 11:11:23 sasl.mechanism = PLAIN 11:11:23 sasl.oauthbearer.clock.skew.seconds = 30 11:11:23 sasl.oauthbearer.expected.audience = null 11:11:23 sasl.oauthbearer.expected.issuer = null 11:11:23 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:23 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:23 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:23 sasl.oauthbearer.jwks.endpoint.url = null 11:11:23 sasl.oauthbearer.scope.claim.name = scope 11:11:23 sasl.oauthbearer.sub.claim.name = sub 11:11:23 sasl.oauthbearer.token.endpoint.url = null 11:11:23 security.protocol = SASL_PLAINTEXT 11:11:23 security.providers = null 11:11:23 send.buffer.bytes = 131072 11:11:23 socket.connection.setup.timeout.max.ms = 30000 11:11:23 
socket.connection.setup.timeout.ms = 10000 11:11:23 ssl.cipher.suites = null 11:11:23 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:23 ssl.endpoint.identification.algorithm = https 11:11:23 ssl.engine.factory.class = null 11:11:23 ssl.key.password = null 11:11:23 ssl.keymanager.algorithm = SunX509 11:11:23 ssl.keystore.certificate.chain = null 11:11:23 ssl.keystore.key = null 11:11:23 ssl.keystore.location = null 11:11:23 ssl.keystore.password = null 11:11:23 ssl.keystore.type = JKS 11:11:23 ssl.protocol = TLSv1.3 11:11:23 ssl.provider = null 11:11:23 ssl.secure.random.implementation = null 11:11:23 ssl.trustmanager.algorithm = PKIX 11:11:23 ssl.truststore.certificates = null 11:11:23 ssl.truststore.location = null 11:11:23 ssl.truststore.password = null 11:11:23 ssl.truststore.type = JKS 11:11:23 transaction.timeout.ms = 60000 11:11:23 transactional.id = null 11:11:23 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:23 11:11:23 11:11:23.090 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.090 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Instantiated an idempotent producer. 11:11:23 11:11:23.090 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.090 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Give up sending metadata request since no node is available 11:11:23 11:11:23.093 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:23 11:11:23.093 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:23 11:11:23.093 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216283093 11:11:23 11:11:23.093 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Starting Kafka producer I/O thread. 
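Editor's note: the ProducerConfig dump above (SASL_PLAINTEXT over the PLAIN mechanism, idempotence enabled, String serializers, bootstrap server localhost:9092) is what the distribution client prints when it creates its internal Kafka producer. As a rough illustration only, here is a minimal, hypothetical Java sketch that builds a producer with the same standard Kafka settings; the real client derives these values from the KafkaDataResponse returned by SDC, and the client id, username and password below are placeholders, not values from this build. With no broker listening on localhost:9092, the sender thread of such a producer retries InitProducerId and logs the same "Connection refused" / "Broker may not be available" warnings that recur throughout this test run.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirrored from the ProducerConfig dump above; the actual client
            // fills them in from the SDC-provided cluster information.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");   // placeholder client id
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"placeholder-user\" password=\"placeholder-password\";"); // placeholders
            // With no broker on localhost:9092, this producer's network thread would log
            // the same connection-refused retries seen in the unit-test output above.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // producer.send(...) is where the distribution client would publish status events
            }
        }
    }
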
11:11:23 11:11:23.093 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Transition from state UNINITIALIZED to INITIALIZING 11:11:23 11:11:23.093 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Kafka producer started 11:11:23 11:11:23.093 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.094 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:23 11:11:23.094 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:23 11:11:23.094 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:23 11:11:23.094 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:23 11:11:23.095 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:23 11:11:23.096 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:23 java.net.ConnectException: Connection refused 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:23 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:23 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:23 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:23 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:23 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:23 at 
org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.096 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Node -1 disconnected. 11:11:23 11:11:23.097 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 11:11:23 11:11:23.097 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:23.097 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.099 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:23 Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 11:11:23 11:11:23.122 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.124 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 11:11:23 11:11:23.124 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 11:11:23 11:11:23.124 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 11:11:23 11:11:23.124 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 11:11:23 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.255 s - in org.onap.sdc.impl.DistributionClientTest 11:11:23 11:11:23.136 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:23 11:11:23.136 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:23 11:11:23.141 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:23 11:11:23.141 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:23 11:11:23.141 [kafka-producer-network-thread | 
mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:23 11:11:23.141 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:23 11:11:23.142 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:23 11:11:23.143 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:23 java.net.ConnectException: Connection refused 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:23 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:23 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:23 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:23 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:23 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.143 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Node -1 disconnected. 11:11:23 11:11:23.143 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
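Editor's note: the earlier "java.net.UnknownHostException: proxy" failures from HttpSdcClient line up with the Configuration dump above, which sets httpProxyHost=proxy and httpProxyPort=8080. When Apache HttpClient routes a request through an HTTP proxy it resolves the proxy host name first, so an unresolvable proxy name fails before the configured SDC address (badhost:8080 or localhost:8181) is ever contacted. The following is a minimal, hypothetical sketch of that behaviour using the stock HttpClient 4.x API, not the client's actual wiring; the proxy host and port are taken from the Configuration dump, the target URL from the log lines above.

    import org.apache.http.HttpHost;
    import org.apache.http.client.config.RequestConfig;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.HttpClients;

    public class ProxyResolutionSketch {
        public static void main(String[] args) throws Exception {
            // Proxy settings as shown in the Configuration dump above.
            HttpHost proxy = new HttpHost("proxy", 8080);
            RequestConfig requestConfig = RequestConfig.custom().setProxy(proxy).build();
            try (CloseableHttpClient client = HttpClients.custom()
                    .setDefaultRequestConfig(requestConfig)
                    .build()) {
                // HttpClient resolves the proxy host before connecting, so if "proxy" is not
                // a resolvable name this throws java.net.UnknownHostException: proxy,
                // matching the failures logged by HttpSdcClient in the test output.
                client.execute(new HttpGet("http://localhost:8181/sdc/v1/artifactTypes"));
            }
        }
    }
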
11:11:23 11:11:23.143 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:23.143 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.149 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:23 11:11:23.161 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] 
Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:23 11:11:23.162 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:23 java.net.ConnectException: Connection refused 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:23 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:23 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:23 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:23 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:23 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.162 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Node -1 disconnected. 11:11:23 11:11:23.162 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 11:11:23 11:11:23.162 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:23.162 [kafka-producer-network-thread | mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-cd7771b9-ad5f-4b41-962c-682e479cf108] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.197 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.197 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 11:11:23 11:11:23.197 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 11:11:23 11:11:23.197 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 11:11:23 11:11:23.198 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Set SASL client state to SEND_APIVERSIONS_REQUEST 11:11:23 11:11:23.198 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 11:11:23 11:11:23.198 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 11:11:23 java.net.ConnectException: Connection refused 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 11:11:23 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 11:11:23 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 11:11:23 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 11:11:23 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 11:11:23 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 11:11:23 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 11:11:23 at 
org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.198 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Node -1 disconnected. 11:11:23 11:11:23.198 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 11:11:23 11:11:23.199 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 11:11:23 11:11:23.199 [kafka-producer-network-thread | mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-573567a6-f65d-49f3-9c3f-a1a56beb573d] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 11:11:23 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
11:11:23 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 11:11:23 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 11:11:23 at java.base/java.lang.Thread.run(Thread.java:829) 11:11:23 11:11:23.200 [kafka-producer-network-thread | mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-414c146f-350c-404b-a200-38d58b83784c] Give up sending metadata request since no node is available 11:11:23 11:11:23.237 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] Give up sending metadata request since no node is available 11:11:23 11:11:23.237 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-37da6eb2-0a27-47c4-bc3c-8032fdeca549, groupId=mso-group] No broker available to send FindCoordinator request 11:11:23 11:11:23.243 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.243 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 11:11:23 11:11:23.243 [kafka-producer-network-thread | mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-1dc14a95-50e0-4c07-89cc-b235db07ae06] Give up sending metadata request since no node is available 11:11:23 [INFO] 11:11:23 [INFO] Results: 11:11:23 [INFO] 11:11:23 [WARNING] Tests run: 71, Failures: 0, Errors: 0, Skipped: 1 11:11:23 [INFO] 11:11:23 [INFO] 11:11:23 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- 11:11:23 [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec 11:11:23 [INFO] Analyzed bundle 'sdc-distribution-client' with 44 classes 11:11:23 [INFO] 11:11:23 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- 11:11:24 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT.jar 11:11:24 [INFO] 11:11:24 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- 11:11:24 [INFO] No previous run data found, generating javadoc. 11:11:26 [INFO] 11:11:26 Loading source files for package org.onap.sdc.http... 
11:11:26 Loading source files for package org.onap.sdc.utils... 11:11:26 Loading source files for package org.onap.sdc.utils.kafka... 11:11:26 Loading source files for package org.onap.sdc.utils.heat... 11:11:26 Loading source files for package org.onap.sdc.impl... 11:11:26 Constructing Javadoc information... 11:11:26 Standard Doclet version 11.0.16 11:11:26 Building tree for all the packages and classes... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html... 
11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... 
11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/constant-values.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/serialized-form.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... 
11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... 
11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... 
11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... 11:11:26 Building index for all the packages and classes... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-tree.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index-all.html... 11:11:26 Building index for all classes... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses-index.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allpackages-index.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/deprecated-list.html... 11:11:26 Building index for all classes... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-summary.html... 11:11:26 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/help-doc.html... 11:11:26 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar 11:11:26 [INFO] 11:11:26 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- 11:11:26 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:11:26 [INFO] 11:11:26 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- 11:11:26 [INFO] 11:11:26 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- 11:11:26 [INFO] Skipping JaCoCo execution due to missing execution data file. 
11:11:26 [INFO] 11:11:26 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- 11:11:26 [INFO] 11:11:26 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- 11:11:26 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT.jar 11:11:26 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT.pom 11:11:26 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar 11:11:26 [INFO] 11:11:26 [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ 11:11:26 [INFO] Building sdc-distribution-ci 2.2.0-SNAPSHOT [4/4] 11:11:26 [INFO] --------------------------------[ jar ]--------------------------------- 11:11:27 [INFO] 11:11:27 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- 11:11:27 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:11:27 [INFO] 11:11:27 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- 11:11:27 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:11:27 [INFO] 11:11:27 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- 11:11:27 [INFO] Using 'UTF-8' encoding to copy filtered resources. 11:11:27 [INFO] Copying 1 resource 11:11:27 [INFO] 11:11:27 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- 11:11:27 [INFO] Changes detected - recompiling the module! 
11:11:27 [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/classes 11:11:27 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. 11:11:27 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. 11:11:27 [INFO] 11:11:27 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- 11:11:27 [INFO] Using 'UTF-8' encoding to copy filtered resources. 11:11:27 [INFO] Copying 2 resources 11:11:27 [INFO] 11:11:27 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- 11:11:27 [INFO] Changes detected - recompiling the module! 11:11:27 [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/test-classes 11:11:27 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. 11:11:27 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. 11:11:27 [INFO] 11:11:27 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- 11:11:27 [INFO] 11:11:27 [INFO] ------------------------------------------------------- 11:11:27 [INFO] T E S T S 11:11:27 [INFO] ------------------------------------------------------- 11:11:28 [INFO] Running org.onap.test.core.service.ClientInitializerTest 11:11:28 EnvironmentVariableExtension: This extension uses reflection to mutate JDK-internal state, which is fragile. Check the Javadoc or documentation for more details. 11:11:29 11:11:29.092 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/home/jenkins/.testcontainers.properties but the file was not found. 
Exception message: FileNotFoundException: /home/jenkins/.testcontainers.properties (No such file or directory) 11:11:29 11:11:29.101 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor') 11:11:30 11:11:30.131 [main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock) 11:11:30 11:11:30.144 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost 11:11:30 11:11:30.199 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker: 11:11:30 Server Version: 20.10.18 11:11:30 API Version: 1.41 11:11:30 Operating System: Ubuntu 18.04.6 LTS 11:11:30 Total Memory: 32167 MB 11:11:30 11:11:30.237 [main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling docker image: testcontainers/ryuk:0.3.3. Please be patient; this may take some time but only needs to be done once. 11:11:30 11:11:30.246 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory) 11:11:30 11:11:30.629 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Starting to pull image 11:11:30 11:11:30.664 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 11:11:30 11:11:30.883 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (326 KB/? MB) 11:11:30 11:11:30.907 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (326 KB/? MB) 11:11:30 11:11:30.915 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 0 extracted, (326 KB/5 MB) 11:11:31 11:11:31.093 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 1 extracted, (2 MB/5 MB) 11:11:31 11:11:31.249 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 2 extracted, (2 MB/5 MB) 11:11:31 11:11:31.383 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 3 extracted, (5 MB/5 MB) 11:11:32 11:11:32.416 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit 11:11:32 11:11:32.416 [main] INFO org.testcontainers.DockerClientFactory - Checking the system... 11:11:32 11:11:32.417 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0 11:11:32 11:11:32.506 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space 11:11:32 11:11:32.514 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once. 
11:11:32 11:11:32.869 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Starting to pull image 11:11:32 11:11:32.871 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 11:11:32 11:11:32.985 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (1 KB/? MB) 11:11:33 11:11:33.155 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (24 MB/? MB) 11:11:33 11:11:33.256 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (49 MB/? MB) 11:11:33 11:11:33.381 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (73 MB/? MB) 11:11:33 11:11:33.483 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (80 MB/? MB) 11:11:33 11:11:33.578 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (85 MB/? MB) 11:11:33 11:11:33.671 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (98 MB/? MB) 11:11:34 11:11:34.413 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (253 MB/? MB) 11:11:34 11:11:34.451 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 1 extracted, (253 MB/? MB) 11:11:34 11:11:34.520 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 1 extracted, (265 MB/? MB) 11:11:34 11:11:34.570 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 2 extracted, (281 MB/? MB) 11:11:34 11:11:34.686 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 2 extracted, (304 MB/? 
MB) 11:11:35 11:11:35.181 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 2 extracted, (341 MB/370 MB) 11:11:40 11:11:40.212 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 3 extracted, (350 MB/370 MB) 11:11:40 11:11:40.412 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 4 extracted, (357 MB/370 MB) 11:11:40 11:11:40.534 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 5 extracted, (357 MB/370 MB) 11:11:40 11:11:40.876 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 6 extracted, (366 MB/370 MB) 11:11:40 11:11:40.974 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 7 extracted, (366 MB/370 MB) 11:11:41 11:11:41.086 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 8 extracted, (366 MB/370 MB) 11:11:41 11:11:41.185 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 9 extracted, (366 MB/370 MB) 11:11:41 11:11:41.896 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 10 extracted, (370 MB/370 MB) 11:11:42 11:11:42.014 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 11 extracted, (370 MB/370 MB) 11:11:42 11:11:42.034 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pull complete. 11 layers, pulled in 9s (downloaded 370 MB at 41 MB/s) 11:11:42 11:11:42.046 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Creating container for image: confluentinc/cp-kafka:6.2.1 11:11:45 11:11:45.278 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 is starting: 52ab82079595f0a33f23dad283bab387d1e6bac17a86b1bb24cb9920c18adb14 11:11:50 11:11:50.425 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 started in PT17.915535S 11:11:52 11:11:52.200 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling docker image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master. Please be patient; this may take some time but only needs to be done once. 11:11:52 11:11:52.201 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: nexus3.onap.org:10001/onap/onap-component-mock-sdc:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. 
Exception message: /home/jenkins/.docker/config.json (No such file or directory) 11:11:52 11:11:52.903 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Starting to pull image 11:11:52 11:11:52.904 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 11:11:53 11:11:53.391 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 0 extracted, (62 KB/5 MB) 11:11:53 11:11:53.538 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 1 extracted, (5 MB/5 MB) 11:11:53 11:11:53.557 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Creating container for image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master 11:11:53 11:11:53.679 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master is starting: 921a1d44598f5e108b3fef86091d6001bc8868c28cf08b9c48c5fa8398df0b45 11:11:54 11:11:54.067 [main] INFO org.testcontainers.containers.wait.strategy.HttpWaitStrategy - /dazzling_meninsky: Waiting for 60 seconds for URL: http://localhost:49155/sdc/v1/artifactTypes (where port 49155 maps to container port 30206) 11:11:54 11:11:54.092 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master started in PT1.894339S 11:11:55 11:11:55.236 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:55 acks = -1 11:11:55 batch.size = 16384 11:11:55 bootstrap.servers = [localhost:43219] 11:11:55 buffer.memory = 33554432 11:11:55 client.dns.lookup = use_all_dns_ips 11:11:55 client.id = dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679 11:11:55 compression.type = none 11:11:55 connections.max.idle.ms = 540000 11:11:55 delivery.timeout.ms = 120000 11:11:55 enable.idempotence = true 11:11:55 interceptor.classes = [] 11:11:55 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:55 linger.ms = 0 11:11:55 max.block.ms = 60000 11:11:55 max.in.flight.requests.per.connection = 5 11:11:55 max.request.size = 1048576 11:11:55 metadata.max.age.ms = 300000 11:11:55 metadata.max.idle.ms = 300000 11:11:55 metric.reporters = [] 11:11:55 metrics.num.samples = 2 11:11:55 metrics.recording.level = INFO 11:11:55 metrics.sample.window.ms = 30000 11:11:55 partitioner.adaptive.partitioning.enable = true 11:11:55 partitioner.availability.timeout.ms = 0 11:11:55 partitioner.class = null 11:11:55 partitioner.ignore.keys = false 11:11:55 receive.buffer.bytes = 32768 11:11:55 reconnect.backoff.max.ms = 1000 11:11:55 reconnect.backoff.ms = 50 11:11:55 request.timeout.ms = 30000 11:11:55 retries = 2147483647 11:11:55 retry.backoff.ms = 100 11:11:55 sasl.client.callback.handler.class = null 11:11:55 sasl.jaas.config = [hidden] 11:11:55 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:55 sasl.kerberos.min.time.before.relogin = 60000 11:11:55 sasl.kerberos.service.name = null 11:11:55 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:55 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:55 sasl.login.callback.handler.class = null 11:11:55 sasl.login.class = null 11:11:55 sasl.login.connect.timeout.ms = null 11:11:55 
sasl.login.read.timeout.ms = null 11:11:55 sasl.login.refresh.buffer.seconds = 300 11:11:55 sasl.login.refresh.min.period.seconds = 60 11:11:55 sasl.login.refresh.window.factor = 0.8 11:11:55 sasl.login.refresh.window.jitter = 0.05 11:11:55 sasl.login.retry.backoff.max.ms = 10000 11:11:55 sasl.login.retry.backoff.ms = 100 11:11:55 sasl.mechanism = PLAIN 11:11:55 sasl.oauthbearer.clock.skew.seconds = 30 11:11:55 sasl.oauthbearer.expected.audience = null 11:11:55 sasl.oauthbearer.expected.issuer = null 11:11:55 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:55 sasl.oauthbearer.jwks.endpoint.url = null 11:11:55 sasl.oauthbearer.scope.claim.name = scope 11:11:55 sasl.oauthbearer.sub.claim.name = sub 11:11:55 sasl.oauthbearer.token.endpoint.url = null 11:11:55 security.protocol = SASL_PLAINTEXT 11:11:55 security.providers = null 11:11:55 send.buffer.bytes = 131072 11:11:55 socket.connection.setup.timeout.max.ms = 30000 11:11:55 socket.connection.setup.timeout.ms = 10000 11:11:55 ssl.cipher.suites = null 11:11:55 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:55 ssl.endpoint.identification.algorithm = https 11:11:55 ssl.engine.factory.class = null 11:11:55 ssl.key.password = null 11:11:55 ssl.keymanager.algorithm = SunX509 11:11:55 ssl.keystore.certificate.chain = null 11:11:55 ssl.keystore.key = null 11:11:55 ssl.keystore.location = null 11:11:55 ssl.keystore.password = null 11:11:55 ssl.keystore.type = JKS 11:11:55 ssl.protocol = TLSv1.3 11:11:55 ssl.provider = null 11:11:55 ssl.secure.random.implementation = null 11:11:55 ssl.trustmanager.algorithm = PKIX 11:11:55 ssl.truststore.certificates = null 11:11:55 ssl.truststore.location = null 11:11:55 ssl.truststore.password = null 11:11:55 ssl.truststore.type = JKS 11:11:55 transaction.timeout.ms = 60000 11:11:55 transactional.id = null 11:11:55 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:55 11:11:55 11:11:55.344 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Instantiated an idempotent producer. 11:11:55 11:11:55.399 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
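[Editor's note] For readers following the ProducerConfig dump above: it corresponds to a producer using SASL_PLAINTEXT with the PLAIN mechanism and String serializers. The following is a minimal, illustrative Java sketch of an equivalent setup, not the distribution client's actual code; the bootstrap address, client id and JAAS credentials are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class SdcProducerSketch {
        public static KafkaProducer<String, String> build(String bootstrapServers) {
            Properties props = new Properties();
            // Values mirror the ProducerConfig dump in the log; bootstrapServers and the
            // JAAS credentials below are placeholders, not the CI job's real settings.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "dcae-openapi-manager-producer-example");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            return new KafkaProducer<>(props);
        }
    }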
11:11:55 11:11:55.446 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:55 11:11:55.447 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:55 11:11:55.447 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216315443 11:11:55 11:11:55.450 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client initialized successfully 11:11:55 11:11:55.451 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 11:11:55 11:11:55.451 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 11:11:55 11:11:55.468 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: 11:11:55 allow.auto.create.topics = false 11:11:55 auto.commit.interval.ms = 5000 11:11:55 auto.offset.reset = latest 11:11:55 bootstrap.servers = [localhost:43219] 11:11:55 check.crcs = true 11:11:55 client.dns.lookup = use_all_dns_ips 11:11:55 client.id = dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219 11:11:55 client.rack = 11:11:55 connections.max.idle.ms = 540000 11:11:55 default.api.timeout.ms = 60000 11:11:55 enable.auto.commit = true 11:11:55 exclude.internal.topics = true 11:11:55 fetch.max.bytes = 52428800 11:11:55 fetch.max.wait.ms = 500 11:11:55 fetch.min.bytes = 1 11:11:55 group.id = noapp 11:11:55 group.instance.id = null 11:11:55 heartbeat.interval.ms = 3000 11:11:55 interceptor.classes = [] 11:11:55 internal.leave.group.on.close = true 11:11:55 internal.throw.on.fetch.stable.offset.unsupported = false 11:11:55 isolation.level = read_uncommitted 11:11:55 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:11:55 max.partition.fetch.bytes = 1048576 11:11:55 max.poll.interval.ms = 300000 11:11:55 max.poll.records = 500 11:11:55 metadata.max.age.ms = 300000 11:11:55 metric.reporters = [] 11:11:55 metrics.num.samples = 2 11:11:55 metrics.recording.level = INFO 11:11:55 metrics.sample.window.ms = 30000 11:11:55 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:11:55 receive.buffer.bytes = 65536 11:11:55 reconnect.backoff.max.ms = 1000 11:11:55 reconnect.backoff.ms = 50 11:11:55 request.timeout.ms = 30000 11:11:55 retry.backoff.ms = 100 11:11:55 sasl.client.callback.handler.class = null 11:11:55 sasl.jaas.config = [hidden] 11:11:55 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:55 sasl.kerberos.min.time.before.relogin = 60000 11:11:55 sasl.kerberos.service.name = null 11:11:55 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:55 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:55 sasl.login.callback.handler.class = null 11:11:55 sasl.login.class = null 11:11:55 sasl.login.connect.timeout.ms = null 11:11:55 sasl.login.read.timeout.ms = null 11:11:55 sasl.login.refresh.buffer.seconds = 300 11:11:55 sasl.login.refresh.min.period.seconds = 60 11:11:55 sasl.login.refresh.window.factor = 0.8 11:11:55 sasl.login.refresh.window.jitter = 0.05 11:11:55 sasl.login.retry.backoff.max.ms = 10000 11:11:55 sasl.login.retry.backoff.ms = 100 11:11:55 sasl.mechanism = PLAIN 11:11:55 sasl.oauthbearer.clock.skew.seconds = 30 11:11:55 sasl.oauthbearer.expected.audience = null 11:11:55 sasl.oauthbearer.expected.issuer = null 11:11:55 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 
11:11:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:55 sasl.oauthbearer.jwks.endpoint.url = null 11:11:55 sasl.oauthbearer.scope.claim.name = scope 11:11:55 sasl.oauthbearer.sub.claim.name = sub 11:11:55 sasl.oauthbearer.token.endpoint.url = null 11:11:55 security.protocol = SASL_PLAINTEXT 11:11:55 security.providers = null 11:11:55 send.buffer.bytes = 131072 11:11:55 session.timeout.ms = 45000 11:11:55 socket.connection.setup.timeout.max.ms = 30000 11:11:55 socket.connection.setup.timeout.ms = 10000 11:11:55 ssl.cipher.suites = null 11:11:55 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:55 ssl.endpoint.identification.algorithm = https 11:11:55 ssl.engine.factory.class = null 11:11:55 ssl.key.password = null 11:11:55 ssl.keymanager.algorithm = SunX509 11:11:55 ssl.keystore.certificate.chain = null 11:11:55 ssl.keystore.key = null 11:11:55 ssl.keystore.location = null 11:11:55 ssl.keystore.password = null 11:11:55 ssl.keystore.type = JKS 11:11:55 ssl.protocol = TLSv1.3 11:11:55 ssl.provider = null 11:11:55 ssl.secure.random.implementation = null 11:11:55 ssl.trustmanager.algorithm = PKIX 11:11:55 ssl.truststore.certificates = null 11:11:55 ssl.truststore.location = null 11:11:55 ssl.truststore.password = null 11:11:55 ssl.truststore.type = JKS 11:11:55 value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:11:55 11:11:55 11:11:55.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:55 11:11:55.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:55 11:11:55.552 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216315552 11:11:55 11:11:55.553 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Subscribed to topic(s): SDC-DIST-NOTIF-TOPIC 11:11:55 11:11:55.557 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client started successfully 11:11:55 11:11:55.557 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 11:11:56 11:11:56.052 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Cluster ID: JWWDFyr0QjmC2lkkiTU9-A 11:11:56 11:11:56.057 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] ProducerId set to 0 with epoch 0 11:11:56 11:11:56.057 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 2 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.058 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Cluster ID: JWWDFyr0QjmC2lkkiTU9-A 11:11:56 11:11:56.175 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 4 : 
{SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.279 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 6 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.288 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Discovered group coordinator localhost:43219 (id: 2147483646 rack: null) 11:11:56 11:11:56.303 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] (Re-)joining group 11:11:56 11:11:56.333 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Request joining group due to: need to re-join with the given member-id: dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755 11:11:56 11:11:56.334 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 11:11:56 11:11:56.334 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] (Re-)joining group 11:11:56 11:11:56.359 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Successfully joined group with generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755', protocol='range'} 11:11:56 11:11:56.383 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 11 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.386 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Finished assignment for group at generation 1: {dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755=Assignment(partitions=[])} 11:11:56 11:11:56.444 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Successfully synced group in generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755', protocol='range'} 11:11:56 11:11:56.444 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Notifying assignor about the new Assignment(partitions=[]) 11:11:56 11:11:56.445 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Adding newly assigned partitions: 11:11:56 11:11:56.487 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 13 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.559 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 11:11:56 acks = -1 11:11:56 batch.size = 16384 11:11:56 bootstrap.servers = [PLAINTEXT://localhost:43219] 11:11:56 buffer.memory = 33554432 11:11:56 client.dns.lookup = use_all_dns_ips 11:11:56 client.id = producer-1 11:11:56 compression.type = none 11:11:56 connections.max.idle.ms = 540000 11:11:56 delivery.timeout.ms = 120000 11:11:56 enable.idempotence = true 11:11:56 interceptor.classes = [] 11:11:56 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:56 linger.ms = 0 11:11:56 max.block.ms = 60000 11:11:56 max.in.flight.requests.per.connection = 5 11:11:56 max.request.size = 1048576 11:11:56 metadata.max.age.ms = 300000 11:11:56 metadata.max.idle.ms = 300000 11:11:56 metric.reporters = [] 11:11:56 metrics.num.samples = 2 11:11:56 metrics.recording.level = INFO 11:11:56 metrics.sample.window.ms = 30000 11:11:56 partitioner.adaptive.partitioning.enable = true 11:11:56 partitioner.availability.timeout.ms = 0 11:11:56 partitioner.class = null 11:11:56 partitioner.ignore.keys = false 11:11:56 receive.buffer.bytes = 32768 11:11:56 reconnect.backoff.max.ms = 1000 11:11:56 reconnect.backoff.ms = 50 11:11:56 request.timeout.ms = 30000 11:11:56 retries = 2147483647 11:11:56 retry.backoff.ms = 100 11:11:56 sasl.client.callback.handler.class = null 11:11:56 sasl.jaas.config = [hidden] 11:11:56 sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:11:56 sasl.kerberos.min.time.before.relogin = 60000 11:11:56 sasl.kerberos.service.name = null 11:11:56 sasl.kerberos.ticket.renew.jitter = 0.05 11:11:56 sasl.kerberos.ticket.renew.window.factor = 0.8 11:11:56 sasl.login.callback.handler.class = null 11:11:56 sasl.login.class = null 11:11:56 sasl.login.connect.timeout.ms = null 11:11:56 sasl.login.read.timeout.ms = null 11:11:56 sasl.login.refresh.buffer.seconds = 300 11:11:56 sasl.login.refresh.min.period.seconds = 60 11:11:56 sasl.login.refresh.window.factor = 0.8 11:11:56 sasl.login.refresh.window.jitter = 0.05 11:11:56 sasl.login.retry.backoff.max.ms = 10000 11:11:56 sasl.login.retry.backoff.ms = 100 11:11:56 sasl.mechanism = PLAIN 11:11:56 sasl.oauthbearer.clock.skew.seconds = 30 11:11:56 sasl.oauthbearer.expected.audience = null 11:11:56 sasl.oauthbearer.expected.issuer = null 11:11:56 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:11:56 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:11:56 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:11:56 sasl.oauthbearer.jwks.endpoint.url = null 11:11:56 sasl.oauthbearer.scope.claim.name = scope 11:11:56 sasl.oauthbearer.sub.claim.name = sub 11:11:56 sasl.oauthbearer.token.endpoint.url = null 11:11:56 security.protocol = SASL_PLAINTEXT 11:11:56 security.providers = null 11:11:56 send.buffer.bytes = 131072 11:11:56 
socket.connection.setup.timeout.max.ms = 30000 11:11:56 socket.connection.setup.timeout.ms = 10000 11:11:56 ssl.cipher.suites = null 11:11:56 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:11:56 ssl.endpoint.identification.algorithm = https 11:11:56 ssl.engine.factory.class = null 11:11:56 ssl.key.password = null 11:11:56 ssl.keymanager.algorithm = SunX509 11:11:56 ssl.keystore.certificate.chain = null 11:11:56 ssl.keystore.key = null 11:11:56 ssl.keystore.location = null 11:11:56 ssl.keystore.password = null 11:11:56 ssl.keystore.type = JKS 11:11:56 ssl.protocol = TLSv1.3 11:11:56 ssl.provider = null 11:11:56 ssl.secure.random.implementation = null 11:11:56 ssl.trustmanager.algorithm = PKIX 11:11:56 ssl.truststore.certificates = null 11:11:56 ssl.truststore.location = null 11:11:56 ssl.truststore.password = null 11:11:56 ssl.truststore.type = JKS 11:11:56 transaction.timeout.ms = 60000 11:11:56 transactional.id = null 11:11:56 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:11:56 11:11:56 11:11:56.562 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer. 11:11:56 11:11:56.570 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 11:11:56 11:11:56.571 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 11:11:56 11:11:56.571 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1768216316570 11:11:56 11:11:56.591 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Error while fetching metadata with correlation id 14 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 11:11:56 11:11:56.601 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {SDC-DIST-NOTIF-TOPIC=LEADER_NOT_AVAILABLE} 11:11:56 11:11:56.601 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: JWWDFyr0QjmC2lkkiTU9-A 11:11:56 11:11:56.603 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 11:11:56 11:11:56.699 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to YfyM4CyoSkS38gX4SnLTYw 11:11:56 11:11:56.703 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Request joining group due to: cached metadata has changed from (version5: {}) at the beginning of the rebalance to (version8: {SDC-DIST-NOTIF-TOPIC=1}) 11:11:56 11:11:56.704 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Revoke previously assigned partitions 11:11:56 11:11:56.705 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] (Re-)joining group 11:11:56 11:11:56.711 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Successfully joined group with generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755', protocol='range'} 11:11:56 11:11:56.712 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Finished assignment for group at generation 2: {dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755=Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0])} 11:11:56 11:11:56.718 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to YfyM4CyoSkS38gX4SnLTYw 11:11:56 11:11:56.720 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Successfully synced group in generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219-790933f8-6ff9-4182-98d2-7cf32aecd755', protocol='range'} 11:11:56 11:11:56.721 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Notifying assignor about the new Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0]) 11:11:56 11:11:56.726 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Adding newly assigned partitions: SDC-DIST-NOTIF-TOPIC-0 11:11:56 11:11:56.739 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Found no committed offset for partition SDC-DIST-NOTIF-TOPIC-0 11:11:56 11:11:56.772 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Resetting offset for partition SDC-DIST-NOTIF-TOPIC-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43219 (id: 1 rack: null)], epoch=0}}. 11:11:56 11:11:56.813 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 
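The consumer and producer activity above comes from the pairwise test's embedded Kafka setup: the test consumer in group noapp retries while SDC-DIST-NOTIF-TOPIC does not yet exist, is eventually assigned SDC-DIST-NOTIF-TOPIC-0, and a short-lived idempotent producer is created over SASL_PLAINTEXT with the PLAIN mechanism and String serializers. The minimal Java sketch below shows client properties that would produce a similar ProducerConfig dump; the bootstrap address, serializers, and security settings are copied from the log, while the class name and JAAS credentials are illustrative placeholders, not the job's real configuration.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class SdcNotifProducerSketch {
    public static void main(String[] args) {
        // Values mirrored from the ProducerConfig dump above; the JAAS credentials are
        // placeholders (the log shows them as [hidden]), not the real job configuration.
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "PLAINTEXT://localhost:43219");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"placeholder\" password=\"placeholder\";");

        // Constructing the producer is enough to reproduce the "Instantiated an idempotent
        // producer" and "Kafka version" lines above; the consumer in the log uses the same
        // security settings plus group.id=noapp and a subscription to SDC-DIST-NOTIF-TOPIC.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.flush();
        }
    }
}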
11:11:56 11:11:56.823 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 11:11:56 11:11:56.823 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 11:11:56 11:11:56.823 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 11:11:56 11:11:56.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered 11:11:56 11:11:56.825 [main] INFO org.onap.test.core.service.ClientInitializerTest - Waiting for artifacts 11:11:56 11:11:56.872 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:56 11:11:56.872 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:11:56 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:11:56 "consumerID": "dcae-openapi-manager", 11:11:56 "timestamp": 1768216315557, 11:11:56 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/k8s-tca-clamp-policy-05082019.yaml", 11:11:56 "status": "NOT_NOTIFIED" 11:11:56 } 11:11:56 to topic SDC-DIST-STATUS-TOPIC 11:11:56 11:11:56.894 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Error while fetching metadata with correlation id 4 : {SDC-DIST-STATUS-TOPIC=LEADER_NOT_AVAILABLE} 11:11:57 11:11:56.999 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Resetting the last seen epoch of partition SDC-DIST-STATUS-TOPIC-0 to 0 since the associated topicId changed from null to z5GiP3RbQLW2w1Jt3Nr_KA 11:11:58 11:11:58.005 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:58 11:11:58.005 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:11:58 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:11:58 "consumerID": "dcae-openapi-manager", 11:11:58 "timestamp": 1768216315557, 11:11:58 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vf-license-model.xml", 11:11:58 "status": "NOT_NOTIFIED" 11:11:58 } 11:11:58 to topic SDC-DIST-STATUS-TOPIC 11:11:59 11:11:59.007 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:11:59 11:11:59.008 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:11:59 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:11:59 "consumerID": "dcae-openapi-manager", 11:11:59 "timestamp": 1768216315557, 11:11:59 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/base_template.env", 11:11:59 "status": "NOT_NOTIFIED" 11:11:59 } 11:11:59 to topic SDC-DIST-STATUS-TOPIC 11:12:00 11:12:00.010 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:00 11:12:00.011 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:00 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:00 "consumerID": "dcae-openapi-manager", 11:12:00 "timestamp": 1768216315557, 11:12:00 "artifactURL": 
"/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb_cds68b6da5968e40_modules.json", 11:12:00 "status": "NOT_NOTIFIED" 11:12:00 } 11:12:00 to topic SDC-DIST-STATUS-TOPIC 11:12:01 11:12:01.014 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:01 11:12:01.015 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:01 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:01 "consumerID": "dcae-openapi-manager", 11:12:01 "timestamp": 1768216315557, 11:12:01 "artifactURL": "/", 11:12:01 "status": "NOTIFIED" 11:12:01 } 11:12:01 to topic SDC-DIST-STATUS-TOPIC 11:12:02 11:12:02.016 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:02 11:12:02.017 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:02 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:02 "consumerID": "dcae-openapi-manager", 11:12:02 "timestamp": 1768216315557, 11:12:02 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vdns.env", 11:12:02 "status": "NOT_NOTIFIED" 11:12:02 } 11:12:02 to topic SDC-DIST-STATUS-TOPIC 11:12:03 11:12:03.019 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:03 11:12:03.019 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:03 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:03 "consumerID": "dcae-openapi-manager", 11:12:03 "timestamp": 1768216315557, 11:12:03 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vendor-license-model.xml", 11:12:03 "status": "NOT_NOTIFIED" 11:12:03 } 11:12:03 to topic SDC-DIST-STATUS-TOPIC 11:12:04 11:12:04.021 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:04 11:12:04.021 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:04 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:04 "consumerID": "dcae-openapi-manager", 11:12:04 "timestamp": 1768216315557, 11:12:04 "artifactURL": "/", 11:12:04 "status": "NOTIFIED" 11:12:04 } 11:12:04 to topic SDC-DIST-STATUS-TOPIC 11:12:05 11:12:05.026 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:05 11:12:05.026 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:05 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:05 "consumerID": "dcae-openapi-manager", 11:12:05 "timestamp": 1768216315557, 11:12:05 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb.env", 11:12:05 "status": "NOT_NOTIFIED" 11:12:05 } 11:12:05 to topic SDC-DIST-STATUS-TOPIC 11:12:06 11:12:06.028 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:06 11:12:06.028 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:06 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:06 "consumerID": "dcae-openapi-manager", 11:12:06 "timestamp": 1768216315557, 11:12:06 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vpkg.env", 11:12:06 "status": "NOT_NOTIFIED" 11:12:06 } 11:12:06 to topic SDC-DIST-STATUS-TOPIC 
11:12:07 11:12:07.030 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:07 11:12:07.030 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:07 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:07 "consumerID": "dcae-openapi-manager", 11:12:07 "timestamp": 1768216315557, 11:12:07 "artifactURL": "/", 11:12:07 "status": "NOTIFIED" 11:12:07 } 11:12:07 to topic SDC-DIST-STATUS-TOPIC 11:12:08 11:12:08.032 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:08 11:12:08.033 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:08 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:08 "consumerID": "dcae-openapi-manager", 11:12:08 "timestamp": 1768216315557, 11:12:08 "artifactURL": "/", 11:12:08 "status": "NOTIFIED" 11:12:08 } 11:12:08 to topic SDC-DIST-STATUS-TOPIC 11:12:09 11:12:09.035 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:09 11:12:09.035 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:09 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:09 "consumerID": "dcae-openapi-manager", 11:12:09 "timestamp": 1768216315557, 11:12:09 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-template.yml", 11:12:09 "status": "NOT_NOTIFIED" 11:12:09 } 11:12:09 to topic SDC-DIST-STATUS-TOPIC 11:12:10 11:12:10.037 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 11:12:10 11:12:10.037 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 11:12:10 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 11:12:10 "consumerID": "dcae-openapi-manager", 11:12:10 "timestamp": 1768216315557, 11:12:10 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-csar.csar", 11:12:10 "status": "NOT_NOTIFIED" 11:12:10 } 11:12:10 to topic SDC-DIST-STATUS-TOPIC 11:12:11 11:12:11.042 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 11:12:11 11:12:11.042 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Distributed service information 11:12:11 11:12:11.042 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service UUID: d2192fd5-6ba4-40d2-9078-e3642d9175ee 11:12:11 11:12:11.043 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service name: demoVLB_CDS 11:12:11 11:12:11.043 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service resources: 11:12:11 11:12:11.044 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Resource: vLB_CDS 68b6da59-68e4 11:12:11 11:12:11.044 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Artifacts: 11:12:11 11:12:11.045 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vpkg.yaml 11:12:11 11:12:11.045 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vlb.yaml 11:12:11 11:12:11.045 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vdns.yaml 11:12:11 11:12:11.046 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: base_template.yaml 11:12:11 11:12:11.046 [pool-1-thread-1] INFO 
org.onap.test.core.service.ClientNotifyCallback - ================================================= 11:12:11 11:12:11.046 [pool-1-thread-1] INFO org.onap.test.core.service.ArtifactsDownloader - Downloading artifacts... 11:12:11 11:12:11.060 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 11:12:11 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 11:12:11 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:12:11 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:12:11 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:12:11 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:12:11 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:12:11 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:12:11 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 11:12:11 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 11:12:11 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 11:12:11 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 11:12:11 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 11:12:11 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 11:12:11 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 11:12:11 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 11:12:11 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 11:12:11 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 11:12:11 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 11:12:11 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 11:12:11 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 11:12:11 at java.base/java.lang.Thread.run(Thread.java:829) 11:12:11 Caused by: java.net.ConnectException: Connection refused 
(Connection refused) 11:12:11 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 11:12:11 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 11:12:11 at java.base/java.net.Socket.connect(Socket.java:609) 11:12:11 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 11:12:11 ... 30 common frames omitted 11:12:11 11:12:11.062 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5b009436 11:12:11 11:12:11.066 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 11:12:11 11:12:11.068 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 11:12:11 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 11:12:11 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:12:11 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:12:11 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:12:11 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:12:11 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:12:11 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:12:11 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 11:12:11 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 11:12:11 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 11:12:11 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 11:12:11 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 11:12:11 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 11:12:11 at 
java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 11:12:11 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 11:12:11 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 11:12:11 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 11:12:11 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 11:12:11 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 11:12:11 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 11:12:11 at java.base/java.lang.Thread.run(Thread.java:829) 11:12:11 Caused by: java.net.ConnectException: Connection refused (Connection refused) 11:12:11 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 11:12:11 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 11:12:11 at java.base/java.net.Socket.connect(Socket.java:609) 11:12:11 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 11:12:11 ... 
30 common frames omitted 11:12:11 11:12:11.068 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@21d9d6c5 11:12:11 11:12:11.069 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 11:12:11 11:12:11.070 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 11:12:11 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 11:12:11 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:12:11 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:12:11 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:12:11 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:12:11 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:12:11 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:12:11 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 11:12:11 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 11:12:11 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 11:12:11 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 11:12:11 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 11:12:11 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 11:12:11 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 11:12:11 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 11:12:11 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 11:12:11 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 11:12:11 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 11:12:11 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 11:12:11 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 11:12:11 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 11:12:11 at java.base/java.lang.Thread.run(Thread.java:829) 11:12:11 Caused by: java.net.ConnectException: Connection refused (Connection refused) 11:12:11 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 11:12:11 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 11:12:11 at java.base/java.net.Socket.connect(Socket.java:609) 11:12:11 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 11:12:11 ... 30 common frames omitted 11:12:11 11:12:11.070 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@2a511756 11:12:11 11:12:11.070 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 11:12:11 11:12:11.071 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 11:12:11 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 11:12:11 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 11:12:11 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 11:12:11 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 11:12:11 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 11:12:11 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 11:12:11 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 11:12:11 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 11:12:11 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 11:12:11 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 11:12:11 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 11:12:11 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 11:12:11 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 11:12:11 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 11:12:11 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 11:12:11 at 
java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 11:12:11 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 11:12:11 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 11:12:11 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 11:12:11 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 11:12:11 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 11:12:11 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 11:12:11 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 11:12:11 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 11:12:11 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 11:12:11 at java.base/java.lang.Thread.run(Thread.java:829) 11:12:11 Caused by: java.net.ConnectException: Connection refused (Connection refused) 11:12:11 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 11:12:11 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 11:12:11 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 11:12:11 at java.base/java.net.Socket.connect(Socket.java:609) 11:12:11 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 11:12:11 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 11:12:11 ... 30 common frames omitted 11:12:11 11:12:11.072 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@48b43328 11:12:11 11:12:11.072 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 11:12:11 11:12:11.127 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 11:12:11 11:12:11.128 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client stopped successfully 11:12:11 11:12:11.128 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 11:12:11 11:12:11.561 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Node 1 disconnected. 11:12:11 11:12:11.565 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Node -1 disconnected. 11:12:11 11:12:11.612 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Node 1 disconnected. 
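Every artifact download above fails with HttpHostConnectException because nothing is listening on localhost:30206, so SdcConnectorClient reports GENERAL_ERROR for each artifact even though the build later finishes successfully. As a rough illustration only (not part of the client), the sketch below probes TCP reachability of that endpoint before attempting downloads; the host and port are taken from the stack traces, everything else is assumed.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SdcReachabilityProbe {

    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // "Connection refused" from a closed port lands here, matching the traces above.
            return false;
        }
    }

    public static void main(String[] args) {
        // Host and port taken from the failing artifact-download requests in the log.
        System.out.println("SDC endpoint reachable: " + isReachable("localhost", 30206, 2000));
    }
}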
11:12:11 11:12:11.613 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Node -1 disconnected. 11:12:11 11:12:11.613 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Node 2147483646 disconnected. 11:12:11 11:12:11.614 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Group coordinator localhost:43219 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 11:12:11 11:12:11.664 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Node 1 disconnected. 11:12:11 11:12:11.664 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 11:12:11 11:12:11.717 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Node 1 disconnected. 11:12:11 11:12:11.717 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 11:12:11 11:12:11.818 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Node 1 disconnected. 11:12:11 11:12:11.819 [kafka-producer-network-thread | dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-7925c1be-6325-4366-a05c-d0451c70d679] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 11:12:11 11:12:11.920 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Node 1 disconnected. 11:12:11 11:12:11.921 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-e00e3481-fed3-4484-a104-0a1bdbef7219, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 
11:12:11 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.148 s - in org.onap.test.core.service.ClientInitializerTest 11:12:12 [INFO] 11:12:12 [INFO] Results: 11:12:12 [INFO] 11:12:12 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 11:12:12 [INFO] 11:12:12 [INFO] 11:12:12 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci --- 11:12:12 [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec 11:12:12 [INFO] Analyzed bundle 'sdc-distribution-ci' with 9 classes 11:12:12 [INFO] 11:12:12 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci --- 11:12:12 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar 11:12:12 [INFO] 11:12:12 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci --- 11:12:12 [INFO] No previous run data found, generating javadoc. 11:12:14 [INFO] 11:12:14 Loading source files for package org.onap.test.core.service... 11:12:14 Loading source files for package org.onap.test.core.config... 11:12:14 Loading source files for package org.onap.test.it... 11:12:14 Constructing Javadoc information... 11:12:14 Standard Doclet version 11.0.16 11:12:14 Building tree for all the packages and classes... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html... 
11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/constant-values.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html... 11:12:14 Building index for all the packages and classes... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-tree.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index-all.html... 
11:12:14 Building index for all classes... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses-index.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allpackages-index.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/deprecated-list.html... 11:12:14 Building index for all classes... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-summary.html... 11:12:14 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/help-doc.html... 11:12:14 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar 11:12:14 [INFO] 11:12:14 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci --- 11:12:14 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** 11:12:14 [INFO] 11:12:14 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci --- 11:12:14 [INFO] 11:12:14 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci --- 11:12:14 [INFO] Skipping JaCoCo execution due to missing execution data file. 
11:12:14 [INFO] 11:12:14 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci --- 11:12:14 [INFO] 11:12:14 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci --- 11:12:14 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT.jar 11:12:14 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT.pom 11:12:14 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT-javadoc.jar 11:12:14 [INFO] ------------------------------------------------------------------------ 11:12:14 [INFO] Reactor Summary for sdc-sdc-distribution-client 2.2.0-SNAPSHOT: 11:12:14 [INFO] 11:12:14 [INFO] sdc-sdc-distribution-client ........................ SUCCESS [ 8.393 s] 11:12:14 [INFO] sdc-distribution-client-api ........................ SUCCESS [ 4.654 s] 11:12:14 [INFO] sdc-distribution-client ............................ SUCCESS [ 49.700 s] 11:12:14 [INFO] sdc-distribution-ci ................................ SUCCESS [ 47.857 s] 11:12:14 [INFO] ------------------------------------------------------------------------ 11:12:14 [INFO] BUILD SUCCESS 11:12:14 [INFO] ------------------------------------------------------------------------ 11:12:14 [INFO] Total time: 01:51 min 11:12:14 [INFO] Finished at: 2026-01-12T11:12:14Z 11:12:14 [INFO] ------------------------------------------------------------------------ 11:12:14 $ ssh-agent -k 11:12:14 unset SSH_AUTH_SOCK; 11:12:14 unset SSH_AGENT_PID; 11:12:14 echo Agent pid 2060 killed; 11:12:14 [ssh-agent] Stopped. 11:12:14 [PostBuildScript] - [INFO] Executing post build scripts. 
11:12:14 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins7358108675306042256.sh 11:12:14 ---> sysstat.sh 11:12:15 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins6096487739917928260.sh 11:12:15 ---> package-listing.sh 11:12:15 ++ tr '[:upper:]' '[:lower:]' 11:12:15 ++ facter osfamily 11:12:15 + OS_FAMILY=debian 11:12:15 + workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise 11:12:15 + START_PACKAGES=/tmp/packages_start.txt 11:12:15 + END_PACKAGES=/tmp/packages_end.txt 11:12:15 + DIFF_PACKAGES=/tmp/packages_diff.txt 11:12:15 + PACKAGES=/tmp/packages_start.txt 11:12:15 + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' 11:12:15 + PACKAGES=/tmp/packages_end.txt 11:12:15 + case "${OS_FAMILY}" in 11:12:15 + dpkg -l 11:12:15 + grep '^ii' 11:12:15 + '[' -f /tmp/packages_start.txt ']' 11:12:15 + '[' -f /tmp/packages_end.txt ']' 11:12:15 + diff /tmp/packages_start.txt /tmp/packages_end.txt 11:12:15 + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' 11:12:15 + mkdir -p /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ 11:12:15 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ 11:12:15 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins8876087063735260208.sh 11:12:15 ---> capture-instance-metadata.sh 11:12:15 Setup pyenv: 11:12:15 system 11:12:15 3.8.13 11:12:15 3.9.13 11:12:15 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) 11:12:15 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-L4o3 from file:/tmp/.os_lf_venv 11:12:15 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) 11:12:15 lf-activate-venv(): INFO: Attempting to install with network-safe options... 11:12:17 lf-activate-venv(): INFO: Base packages installed successfully 11:12:17 lf-activate-venv(): INFO: Installing additional packages: lftools 11:12:26 lf-activate-venv(): INFO: Adding /tmp/venv-L4o3/bin to PATH 11:12:26 INFO: Running in OpenStack, capturing instance metadata 11:12:27 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins92638103882056984.sh 11:12:27 provisioning config files... 11:12:27 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config6920303094218789977tmp 11:12:27 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 11:12:27 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 11:12:27 [EnvInject] - Injecting environment variables from a build step. 11:12:27 [EnvInject] - Injecting as environment variables the properties content 11:12:27 SERVER_ID=logs 11:12:27 11:12:27 [EnvInject] - Variables injected successfully. 
11:12:27 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins6889870674178222554.sh 11:12:27 ---> create-netrc.sh 11:12:27 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins9562576887851686909.sh 11:12:27 ---> python-tools-install.sh 11:12:27 Setup pyenv: 11:12:27 system 11:12:27 3.8.13 11:12:27 3.9.13 11:12:27 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) 11:12:27 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-L4o3 from file:/tmp/.os_lf_venv 11:12:27 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) 11:12:27 lf-activate-venv(): INFO: Attempting to install with network-safe options... 11:12:29 lf-activate-venv(): INFO: Base packages installed successfully 11:12:29 lf-activate-venv(): INFO: Installing additional packages: lftools 11:12:38 lf-activate-venv(): INFO: Adding /tmp/venv-L4o3/bin to PATH 11:12:38 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins1924730058467811024.sh 11:12:38 ---> sudo-logs.sh 11:12:38 Archiving 'sudo' log.. 11:12:38 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins6744646776473380218.sh 11:12:38 ---> job-cost.sh 11:12:38 Setup pyenv: 11:12:38 system 11:12:38 3.8.13 11:12:38 3.9.13 11:12:38 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) 11:12:38 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-L4o3 from file:/tmp/.os_lf_venv 11:12:38 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) 11:12:38 lf-activate-venv(): INFO: Attempting to install with network-safe options... 11:12:40 lf-activate-venv(): INFO: Base packages installed successfully 11:12:40 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 11:12:45 lf-activate-venv(): INFO: Adding /tmp/venv-L4o3/bin to PATH 11:12:45 INFO: No Stack... 11:12:46 INFO: Retrieving Pricing Info for: v3-standard-8 11:12:46 INFO: Archiving Costs 11:12:46 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash -l /tmp/jenkins15986644382750104927.sh 11:12:46 ---> logs-deploy.sh 11:12:46 Setup pyenv: 11:12:46 system 11:12:46 3.8.13 11:12:46 3.9.13 11:12:46 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) 11:12:46 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-L4o3 from file:/tmp/.os_lf_venv 11:12:46 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv) 11:12:46 lf-activate-venv(): INFO: Attempting to install with network-safe options... 11:12:48 lf-activate-venv(): INFO: Base packages installed successfully 11:12:48 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15 11:12:56 lf-activate-venv(): INFO: Adding /tmp/venv-L4o3/bin to PATH 11:12:56 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-master-integration-pairwise/1248 11:12:56 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 11:12:57 Archives upload complete. 
11:12:58 INFO: archiving logs to Nexus
11:12:59 ---> uname -a:
11:12:59 Linux prd-ubuntu1804-docker-8c-8g-4909 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
11:12:59 
11:12:59 
11:12:59 ---> lscpu:
11:12:59 Architecture: x86_64
11:12:59 CPU op-mode(s): 32-bit, 64-bit
11:12:59 Byte Order: Little Endian
11:12:59 CPU(s): 8
11:12:59 On-line CPU(s) list: 0-7
11:12:59 Thread(s) per core: 1
11:12:59 Core(s) per socket: 1
11:12:59 Socket(s): 8
11:12:59 NUMA node(s): 1
11:12:59 Vendor ID: AuthenticAMD
11:12:59 CPU family: 23
11:12:59 Model: 49
11:12:59 Model name: AMD EPYC-Rome Processor
11:12:59 Stepping: 0
11:12:59 CPU MHz: 2799.998
11:12:59 BogoMIPS: 5599.99
11:12:59 Virtualization: AMD-V
11:12:59 Hypervisor vendor: KVM
11:12:59 Virtualization type: full
11:12:59 L1d cache: 32K
11:12:59 L1i cache: 32K
11:12:59 L2 cache: 512K
11:12:59 L3 cache: 16384K
11:12:59 NUMA node0 CPU(s): 0-7
11:12:59 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
11:12:59 
11:12:59 
11:12:59 ---> nproc:
11:12:59 8
11:12:59 
11:12:59 
11:12:59 ---> df -h:
11:12:59 Filesystem Size Used Avail Use% Mounted on
11:12:59 udev 16G 0 16G 0% /dev
11:12:59 tmpfs 3.2G 716K 3.2G 1% /run
11:12:59 /dev/vda1 155G 11G 145G 8% /
11:12:59 tmpfs 16G 0 16G 0% /dev/shm
11:12:59 tmpfs 5.0M 0 5.0M 0% /run/lock
11:12:59 tmpfs 16G 0 16G 0% /sys/fs/cgroup
11:12:59 /dev/vda15 105M 4.4M 100M 5% /boot/efi
11:12:59 tmpfs 3.2G 0 3.2G 0% /run/user/1001
11:12:59 
11:12:59 
11:12:59 ---> free -m:
11:12:59 total used free shared buff/cache available
11:12:59 Mem: 32167 858 28174 0 3134 30857
11:12:59 Swap: 1023 0 1023
11:12:59 
11:12:59 
11:12:59 ---> ip addr:
11:12:59 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
11:12:59 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11:12:59 inet 127.0.0.1/8 scope host lo
11:12:59 valid_lft forever preferred_lft forever
11:12:59 inet6 ::1/128 scope host
11:12:59 valid_lft forever preferred_lft forever
11:12:59 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
11:12:59 link/ether fa:16:3e:c0:ac:42 brd ff:ff:ff:ff:ff:ff
11:12:59 inet 10.30.107.130/23 brd 10.30.107.255 scope global dynamic ens3
11:12:59 valid_lft 86141sec preferred_lft 86141sec
11:12:59 inet6 fe80::f816:3eff:fec0:ac42/64 scope link
11:12:59 valid_lft forever preferred_lft forever
11:12:59 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
11:12:59 link/ether 02:42:f4:b1:19:21 brd ff:ff:ff:ff:ff:ff
11:12:59 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
11:12:59 valid_lft forever preferred_lft forever
11:12:59 inet6 fe80::42:f4ff:feb1:1921/64 scope link
11:12:59 valid_lft forever preferred_lft forever
11:12:59 
11:12:59 
11:12:59 ---> sar -b -r -n DEV:
11:12:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-4909) 01/12/26 _x86_64_ (8 CPU)
11:12:59 
11:12:59 11:08:41 LINUX RESTART (8 CPU)
11:12:59 
11:12:59 11:09:02 tps rtps wtps bread/s bwrtn/s
11:12:59 11:10:01 373.16 73.28 299.88 5250.23 106156.92
11:12:59 11:11:01 208.46 22.39 186.07 905.70 81373.41
11:12:59 11:12:01 101.28 3.68 97.60 474.19 39033.49
11:12:59 Average: 226.82 32.89 193.93 2192.98 75350.49
11:12:59 
11:12:59 11:09:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
11:12:59 11:10:01 30187456 31732976 2751764 8.35 66688 1791132 1403560 4.13 815084 1648196 173484
11:12:59 11:11:01 28305568 30144700 4633652 14.07 84468 2052780 3439648 10.12 2450820 1860956 11632
11:12:59 11:12:01 26965148 29697328 5974072 18.14 104564 2897608 6331848 18.63 3017168 2553248 696
11:12:59 Average: 28486057 30525001 4453163 13.52 85240 2247173 3725019 10.96 2094357 2020800 61937
11:12:59 
11:12:59 11:09:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
11:12:59 11:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:12:59 11:10:01 ens3 375.72 271.45 1850.06 75.39 0.00 0.00 0.00 0.00
11:12:59 11:10:01 lo 1.90 1.90 0.20 0.20 0.00 0.00 0.00 0.00
11:12:59 11:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:12:59 11:11:01 ens3 743.89 518.13 2382.53 149.67 0.00 0.00 0.00 0.00
11:12:59 11:11:01 lo 14.40 14.40 2.01 2.01 0.00 0.00 0.00 0.00
11:12:59 11:12:01 docker0 1.73 2.33 0.37 0.44 0.00 0.00 0.00 0.00
11:12:59 11:12:01 veth268ff19 0.13 0.30 0.02 0.04 0.00 0.00 0.00 0.00
11:12:59 11:12:01 ens3 555.17 352.92 6941.82 72.96 0.00 0.00 0.00 0.00
11:12:59 11:12:01 veth1caa651 1.50 2.10 0.37 0.41 0.00 0.00 0.00 0.00
11:12:59 Average: docker0 0.58 0.78 0.12 0.15 0.00 0.00 0.00 0.00
11:12:59 Average: veth268ff19 0.04 0.10 0.01 0.01 0.00 0.00 0.00 0.00
11:12:59 Average: ens3 559.29 381.45 3735.20 99.48 0.00 0.00 0.00 0.00
11:12:59 Average: veth1caa651 0.50 0.70 0.12 0.14 0.00 0.00 0.00 0.00
11:12:59 
11:12:59 
11:12:59 ---> sar -P ALL:
11:12:59 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-4909) 01/12/26 _x86_64_ (8 CPU)
11:12:59 
11:12:59 11:08:41 LINUX RESTART (8 CPU)
11:12:59 
11:12:59 11:09:02 CPU %user %nice %system %iowait %steal %idle
11:12:59 11:10:01 all 10.22 0.00 1.33 4.30 0.04 84.11
11:12:59 11:10:01 0 3.52 0.00 2.29 0.66 0.02 93.52
11:12:59 11:10:01 1 34.71 0.00 2.00 1.10 0.07 62.11
11:12:59 11:10:01 2 6.82 0.00 0.82 0.65 0.05 91.66
11:12:59 11:10:01 3 4.94 0.00 0.95 16.09 0.05 77.96
11:12:59 11:10:01 4 3.22 0.00 1.79 9.86 0.03 85.10
11:12:59 11:10:01 5 15.02 0.00 1.36 3.79 0.05 79.79
11:12:59 11:10:01 6 5.41 0.00 0.71 0.75 0.02 93.12
11:12:59 11:10:01 7 8.14 0.00 0.71 1.53 0.03 89.59
11:12:59 11:11:01 all 22.38 0.00 1.54 1.77 0.06 74.25
11:12:59 11:11:01 0 22.05 0.00 1.00 0.07 0.07 76.82
11:12:59 11:11:01 1 12.44 0.00 1.92 0.28 0.03 85.33
11:12:59 11:11:01 2 23.58 0.00 0.77 0.03 0.05 75.56
11:12:59 11:11:01 3 31.54 0.00 2.09 8.97 0.08 57.31
11:12:59 11:11:01 4 25.70 0.00 1.87 3.24 0.08 69.10
11:12:59 11:11:01 5 21.43 0.00 2.20 0.73 0.05 75.59
11:12:59 11:11:01 6 21.05 0.00 1.00 0.20 0.05 77.70
11:12:59 11:11:01 7 21.28 0.00 1.45 0.65 0.07 76.56
11:12:59 11:12:01 all 15.82 0.00 2.73 0.97 0.06 80.42
11:12:59 11:12:01 0 17.42 0.00 2.21 0.02 0.05 80.30
11:12:59 11:12:01 1 15.19 0.00 2.42 0.28 0.07 82.04
11:12:59 11:12:01 2 14.57 0.00 3.60 4.54 0.07 77.22
11:12:59 11:12:01 3 19.35 0.00 3.50 1.51 0.07 75.57
11:12:59 11:12:01 4 17.45 0.00 2.35 0.13 0.07 80.00
11:12:59 11:12:01 5 13.25 0.00 2.15 0.12 0.05 84.43
11:12:59 11:12:01 6 16.17 0.00 2.92 0.52 0.07 80.32
11:12:59 11:12:01 7 13.12 0.00 2.70 0.62 0.05 83.50
11:12:59 Average: all 16.17 0.00 1.87 2.34 0.05 79.57
11:12:59 Average: 0 14.38 0.00 1.83 0.25 0.04 83.50
11:12:59 Average: 1 20.70 0.00 2.12 0.55 0.06 76.57
11:12:59 Average: 2 15.04 0.00 1.73 1.74 0.06 81.42
11:12:59 Average: 3 18.69 0.00 2.19 8.83 0.07 70.23
11:12:59 Average: 4 15.53 0.00 2.00 4.38 0.06 78.02
11:12:59 Average: 5 16.59 0.00 1.91 1.54 0.05 79.92
11:12:59 Average: 6 14.25 0.00 1.55 0.49 0.04 83.67
11:12:59 Average: 7 14.21 0.00 1.62 0.93 0.05 83.19
11:12:59 
11:12:59 
11:12:59 