09:07:42 Triggered by Gerrit: https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/143280
09:07:42 Running as SYSTEM
09:07:42 [EnvInject] - Loading node environment variables.
09:07:42 Building remotely on prd-ubuntu1804-docker-8c-8g-8464 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
09:07:42 [ssh-agent] Looking for ssh-agent implementation...
09:07:42 $ ssh-agent
09:07:42 SSH_AUTH_SOCK=/tmp/ssh-GMUfAsqZThPt/agent.2055
09:07:42 SSH_AGENT_PID=2057
09:07:42 [ssh-agent] Started.
09:07:42 Running ssh-add (command line suppressed)
09:07:42 Identity added: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_14264843090633455971.key (/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_14264843090633455971.key)
09:07:42 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
09:07:42 The recommended git tool is: NONE
09:07:44 using credential onap-jenkins-ssh
09:07:44 Wiping out workspace first.
09:07:44 Cloning the remote Git repository
09:07:44 Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
09:07:44 > git init /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise # timeout=10
09:07:44 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
09:07:44 > git --version # timeout=10
09:07:44 > git --version # 'git version 2.17.1'
09:07:44 using GIT_SSH to set credentials Gerrit user
09:07:44 Verifying host key using manually-configured host key entries
09:07:44 > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git +refs/heads/*:refs/remotes/origin/* # timeout=30
09:07:45 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
09:07:45 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
09:07:46 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10
09:07:46 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git
09:07:46 using GIT_SSH to set credentials Gerrit user
09:07:46 Verifying host key using manually-configured host key entries
09:07:46 > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git refs/changes/80/143280/1 # timeout=30
09:07:46 > git rev-parse acf2499763e4095f201ce9b2081f3b0550417cd4^{commit} # timeout=10
09:07:46 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script
09:07:46 Checking out Revision acf2499763e4095f201ce9b2081f3b0550417cd4 (refs/changes/80/143280/1)
09:07:46 > git config core.sparsecheckout # timeout=10
09:07:46 > git checkout -f acf2499763e4095f201ce9b2081f3b0550417cd4 # timeout=30
09:07:49 Commit message: "CI: Deploy python based Github2Gerrit"
09:07:49 > git rev-parse FETCH_HEAD^{commit} # timeout=10
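The patchset is fetched via the Gerrit refspec refs/changes/80/143280/1, which follows Gerrit's standard layout refs/changes/NN/CHANGE/PATCHSET, where NN is the change number modulo 100, zero-padded to two digits. A minimal sketch (the helper function is illustrative, not part of the build):

```python
def gerrit_refspec(change_number: int, patchset: int) -> str:
    """Build a Gerrit change refspec: refs/changes/NN/CHANGE/PATCHSET,
    where NN is the change number modulo 100, zero-padded to two digits."""
    return f"refs/changes/{change_number % 100:02d}/{change_number}/{patchset}"

# Change 143280, patchset 1 -- matches the refspec fetched in this build.
print(gerrit_refspec(143280, 1))  # → refs/changes/80/143280/1
```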
09:07:49 > git rev-list --no-walk 1c0fa91062a4af64d089eff9ba6f83aa6b52813b # timeout=10
09:07:50 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins18397121672334398285.sh
09:07:50 ---> python-tools-install.sh
09:07:50 Setup pyenv:
09:07:50 * system (set by /opt/pyenv/version)
09:07:50 * 3.8.13 (set by /opt/pyenv/version)
09:07:50 * 3.9.13 (set by /opt/pyenv/version)
09:07:50 * 3.10.6 (set by /opt/pyenv/version)
09:07:55 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-Qtqx
09:07:55 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
09:07:55 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
09:07:55 lf-activate-venv(): INFO: Attempting to install with network-safe options...
09:08:00 lf-activate-venv(): INFO: Base packages installed successfully
09:08:00 lf-activate-venv(): INFO: Installing additional packages: lftools
09:08:30 lf-activate-venv(): INFO: Adding /tmp/venv-Qtqx/bin to PATH
09:08:30 Generating Requirements File
09:08:51 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
09:08:51 httplib2 0.30.2 requires pyparsing<4,>=3.0.4, but you have pyparsing 2.4.7 which is incompatible.
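The conflict reported above is a plain version mismatch: httplib2 0.30.2 declares the requirement pyparsing<4,>=3.0.4, but the venv resolved pyparsing 2.4.7, which falls below the >=3.0.4 floor. A stdlib-only sketch of the failing check (the requirement bounds come from the log line; the parsing helper is illustrative and far simpler than pip's actual resolver):

```python
def version_tuple(v: str) -> tuple:
    """Parse a simple dotted version like '2.4.7' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

installed = version_tuple("2.4.7")  # pyparsing version present in the venv
lower = version_tuple("3.0.4")      # floor from httplib2's 'pyparsing<4,>=3.0.4'
upper = version_tuple("4")          # ceiling from the same specifier

# The specifier is satisfied only if lower <= installed < upper.
satisfied = lower <= installed < upper
print(satisfied)  # → False: 2.4.7 < 3.0.4, hence the pip dependency error
```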
09:08:52 Python 3.10.6
09:08:52 pip 26.0.1 from /tmp/venv-Qtqx/lib/python3.10/site-packages/pip (python 3.10)
09:08:52 appdirs==1.4.4
09:08:52 argcomplete==3.6.3
09:08:52 aspy.yaml==1.3.0
09:08:52 attrs==25.4.0
09:08:52 autopage==0.6.0
09:08:52 backports.strenum==1.3.1
09:08:52 beautifulsoup4==4.14.3
09:08:52 boto3==1.42.48
09:08:52 botocore==1.42.48
09:08:52 bs4==0.0.2
09:08:52 certifi==2026.1.4
09:08:52 cffi==2.0.0
09:08:52 cfgv==3.5.0
09:08:52 chardet==5.2.0
09:08:52 charset-normalizer==3.4.4
09:08:52 click==8.3.1
09:08:52 cliff==4.13.1
09:08:52 cmd2==3.2.0
09:08:52 cryptography==3.3.2
09:08:52 debtcollector==3.0.0
09:08:52 decorator==5.2.1
09:08:52 defusedxml==0.7.1
09:08:52 Deprecated==1.3.1
09:08:52 distlib==0.4.0
09:08:52 dnspython==2.8.0
09:08:52 docker==7.1.0
09:08:52 dogpile.cache==1.5.0
09:08:52 durationpy==0.10
09:08:52 email-validator==2.3.0
09:08:52 filelock==3.21.2
09:08:52 future==1.0.0
09:08:52 gitdb==4.0.12
09:08:52 GitPython==3.1.46
09:08:52 httplib2==0.30.2
09:08:52 identify==2.6.16
09:08:52 idna==3.11
09:08:52 importlib-resources==1.5.0
09:08:52 iso8601==2.1.0
09:08:52 Jinja2==3.1.6
09:08:52 jmespath==1.1.0
09:08:52 jsonpatch==1.33
09:08:52 jsonpointer==3.0.0
09:08:52 jsonschema==4.26.0
09:08:52 jsonschema-specifications==2025.9.1
09:08:52 keystoneauth1==5.13.0
09:08:52 kubernetes==35.0.0
09:08:52 lftools==0.37.21
09:08:52 lxml==6.0.2
09:08:52 markdown-it-py==4.0.0
09:08:52 MarkupSafe==3.0.3
09:08:52 mdurl==0.1.2
09:08:52 msgpack==1.1.2
09:08:52 multi_key_dict==2.0.3
09:08:52 munch==4.0.0
09:08:52 netaddr==1.3.0
09:08:52 niet==1.4.2
09:08:52 nodeenv==1.10.0
09:08:52 oauth2client==4.1.3
09:08:52 oauthlib==3.3.1
09:08:52 openstacksdk==4.9.0
09:08:52 os-service-types==1.8.2
09:08:52 osc-lib==4.3.0
09:08:52 oslo.config==10.2.0
09:08:52 oslo.context==6.2.0
09:08:52 oslo.i18n==6.7.1
09:08:52 oslo.log==8.0.0
09:08:52 oslo.serialization==5.9.0
09:08:52 oslo.utils==9.2.0
09:08:52 packaging==26.0
09:08:52 pbr==7.0.3
09:08:52 platformdirs==4.7.0
09:08:52 prettytable==3.17.0
09:08:52 psutil==7.2.2
09:08:52 pyasn1==0.6.2
09:08:52 pyasn1_modules==0.4.2
09:08:52 pycparser==3.0
09:08:52 pygerrit2==2.0.15
09:08:52 PyGithub==2.8.1
09:08:52 Pygments==2.19.2
09:08:52 PyJWT==2.11.0
09:08:52 PyNaCl==1.6.2
09:08:52 pyparsing==2.4.7
09:08:52 pyperclip==1.11.0
09:08:52 pyrsistent==0.20.0
09:08:52 python-cinderclient==9.8.0
09:08:52 python-dateutil==2.9.0.post0
09:08:52 python-heatclient==5.0.0
09:08:52 python-jenkins==1.8.3
09:08:52 python-keystoneclient==5.7.0
09:08:52 python-magnumclient==4.9.0
09:08:52 python-openstackclient==8.3.0
09:08:52 python-swiftclient==4.9.0
09:08:52 PyYAML==6.0.3
09:08:52 referencing==0.37.0
09:08:52 requests==2.32.5
09:08:52 requests-oauthlib==2.0.0
09:08:52 requestsexceptions==1.4.0
09:08:52 rfc3986==2.0.0
09:08:52 rich==14.3.2
09:08:52 rich-argparse==1.7.2
09:08:52 rpds-py==0.30.0
09:08:52 rsa==4.9.1
09:08:52 ruamel.yaml==0.19.1
09:08:52 ruamel.yaml.clib==0.2.15
09:08:52 s3transfer==0.16.0
09:08:52 simplejson==3.20.2
09:08:52 six==1.17.0
09:08:52 smmap==5.0.2
09:08:52 soupsieve==2.8.3
09:08:52 stevedore==5.6.0
09:08:52 tabulate==0.9.0
09:08:52 toml==0.10.2
09:08:52 tomlkit==0.14.0
09:08:52 tqdm==4.67.3
09:08:52 typing_extensions==4.15.0
09:08:52 tzdata==2025.3
09:08:52 urllib3==1.26.20
09:08:52 virtualenv==20.36.1
09:08:52 wcwidth==0.6.0
09:08:52 websocket-client==1.9.0
09:08:52 wrapt==2.1.1
09:08:52 xdg==6.0.0
09:08:52 xmltodict==1.0.2
09:08:52 yq==3.4.3
09:08:52 [EnvInject] - Injecting environment variables from a build step.
09:08:52 [EnvInject] - Injecting as environment variables the properties content
09:08:52 SET_JDK_VERSION=openjdk11
09:08:52 GIT_URL="git://cloud.onap.org/mirror"
09:08:52
09:08:52 [EnvInject] - Variables injected successfully.
09:08:52 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/sh /tmp/jenkins456141069745193913.sh
09:08:52 ---> update-java-alternatives.sh
09:08:52 ---> Updating Java version
09:08:52 ---> Ubuntu/Debian system detected
09:08:53 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
09:08:53 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
09:08:53 update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
09:08:53 openjdk version "11.0.16" 2022-07-19
09:08:53 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04)
09:08:53 OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode)
09:08:53 JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
09:08:53 [EnvInject] - Injecting environment variables from a build step.
09:08:53 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
09:08:53 [EnvInject] - Variables injected successfully.
09:08:53 provisioning config files...
09:08:53 copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config1414240723910304214tmp
09:08:53 copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config7235248453170007093tmp
09:08:53 [EnvInject] - Injecting environment variables from a build step.
09:08:53 Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36 on prd-ubuntu1804-docker-8c-8g-8464
09:08:54 using settings config with name sdc-sdc-distribution-client-settings
09:08:54 Replacing all maven server entries not found in credentials list is true
09:08:54 using global settings config with name global-settings
09:08:54 Replacing all maven server entries not found in credentials list is true
09:08:54 [sdc-sdc-distribution-client-master-integration-pairwise] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -s /tmp/settings14768415621510870835.xml -gs /tmp/global-settings2434134149779041705.xml -DGERRIT_BRANCH=master -DGERRIT_PATCHSET_REVISION=acf2499763e4095f201ce9b2081f3b0550417cd4 -DGERRIT_HOST=gerrit.onap.org -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -DGERRIT_CHANGE_OWNER_EMAIL=ksandi@contractor.linuxfoundation.org "-DGERRIT_EVENT_ACCOUNT_NAME=Kevin Sandi" -DGERRIT_CHANGE_URL=https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/143280 -DGERRIT_PATCHSET_UPLOADER_EMAIL=ksandi@contractor.linuxfoundation.org "-DARCHIVE_ARTIFACTS= **/target/surefire-reports/*-output.txt" -DGERRIT_EVENT_TYPE=patchset-created -DSTACK_NAME=$JOB_NAME-$BUILD_NUMBER -DGERRIT_PROJECT=sdc/sdc-distribution-client -DGERRIT_PATCHSET_UPLOADER_USERNAME=kevin.sandi -DGERRIT_CHANGE_NUMBER=143280 -DGERRIT_SCHEME=ssh '-DGERRIT_PATCHSET_UPLOADER=\"Kevin Sandi\" ' -DGERRIT_PORT=29418 -DGERRIT_CHANGE_PRIVATE_STATE=false -DGERRIT_REFSPEC=refs/changes/80/143280/1 "-DGERRIT_PATCHSET_UPLOADER_NAME=Kevin Sandi" '-DGERRIT_CHANGE_OWNER=\"Kevin Sandi\" ' -DPROJECT=sdc/sdc-distribution-client -DGERRIT_HASHTAGS= -DGERRIT_CHANGE_COMMIT_MESSAGE=Q0k6IERlcGxveSBweXRob24gYmFzZWQgR2l0aHViMkdlcnJpdAoKSXNzdWUtSUQ6IENJTUFOLTMzCkNoYW5nZS1JZDogSTI1ZjcyNTA0ZGVkMTA3M2Q1ZmMxOTMxNTM2ZmYyMWY2MGZiNTFiMTAKU2lnbmVkLW9mZi1ieTogS2V2aW4gU2FuZGkgPGtzYW5kaUBjb250cmFjdG9yLmxpbnV4Zm91bmRhdGlvbi5vcmc+Cg== -DGERRIT_NAME=Primary -DGERRIT_TOPIC= "-DGERRIT_CHANGE_SUBJECT=CI: Deploy python based Github2Gerrit" -DGERRIT_EVENT_ACCOUNT_USERNAME=kevin.sandi -DGERRIT_CHANGE_OWNER_USERNAME=kevin.sandi '-DGERRIT_EVENT_ACCOUNT=\"Kevin Sandi\" ' -DGERRIT_CHANGE_WIP_STATE=false -DGERRIT_CHANGE_ID=I25f72504ded1073d5fc1931536ff21f60fb51b10 -DGERRIT_EVENT_HASH=-1266625831 -DGERRIT_VERSION=3.7.2 -DGERRIT_EVENT_ACCOUNT_EMAIL=ksandi@contractor.linuxfoundation.org -DGERRIT_PATCHSET_NUMBER=1 "-DMAVEN_PARAMS= -P integration-pairwise" "-DGERRIT_CHANGE_OWNER_NAME=Kevin Sandi" -DMAVEN_OPTS='' clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -P integration-pairwise
09:08:55 [INFO] Scanning for projects...
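GERRIT_CHANGE_COMMIT_MESSAGE is passed to Maven base64-encoded. Decoding it (stdlib only) recovers the commit message already shown in the checkout step above:

```python
import base64

# Value of -DGERRIT_CHANGE_COMMIT_MESSAGE from the mvn invocation in this log.
encoded = "Q0k6IERlcGxveSBweXRob24gYmFzZWQgR2l0aHViMkdlcnJpdAoKSXNzdWUtSUQ6IENJTUFOLTMzCkNoYW5nZS1JZDogSTI1ZjcyNTA0ZGVkMTA3M2Q1ZmMxOTMxNTM2ZmYyMWY2MGZiNTFiMTAKU2lnbmVkLW9mZi1ieTogS2V2aW4gU2FuZGkgPGtzYW5kaUBjb250cmFjdG9yLmxpbnV4Zm91bmRhdGlvbi5vcmc+Cg=="

message = base64.b64decode(encoded).decode("utf-8")
print(message.splitlines()[0])  # → CI: Deploy python based Github2Gerrit
```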
09:08:56 [INFO] ------------------------------------------------------------------------
09:08:56 [INFO] Reactor Build Order:
09:08:56 [INFO]
09:08:56 [INFO] sdc-sdc-distribution-client [pom]
09:08:56 [INFO] sdc-distribution-client-api [jar]
09:08:56 [INFO] sdc-distribution-client [jar]
09:08:56 [INFO] sdc-distribution-ci [jar]
09:08:56 [INFO]
09:08:56 [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >--
09:08:56 [INFO] Building sdc-sdc-distribution-client 2.2.0-SNAPSHOT [1/4]
09:08:56 [INFO] --------------------------------[ pom ]---------------------------------
09:08:56 [INFO]
09:08:56 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client ---
09:08:57 [INFO]
09:08:57 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client ---
09:09:05 [INFO]
09:09:05 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client ---
09:09:05 [INFO]
09:09:05 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client ---
09:09:06 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:06 [INFO]
09:09:06 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client ---
09:09:06 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:06 [INFO]
09:09:06 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client ---
09:09:09 [INFO]
09:09:09 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client ---
09:09:09 [INFO]
09:09:09 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client ---
09:09:09 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:09:09 [INFO]
09:09:09 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client ---
09:09:10 [INFO] Not executing Javadoc as the project is not a Java classpath-capable package
09:09:10 [INFO]
09:09:10 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client ---
09:09:10 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:10 [INFO]
09:09:10 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client ---
09:09:10 [INFO] No tests to run.
09:09:11 [INFO]
09:09:11 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client ---
09:09:11 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:09:11 [INFO]
09:09:11 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client ---
09:09:11 [INFO]
09:09:11 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client ---
09:09:11 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.2.0-SNAPSHOT/sdc-main-distribution-client-2.2.0-SNAPSHOT.pom
09:09:11 [INFO]
09:09:11 [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-distribution-client-api >--
09:09:11 [INFO] Building sdc-distribution-client-api 2.2.0-SNAPSHOT [2/4]
09:09:11 [INFO] --------------------------------[ jar ]---------------------------------
09:09:11 [INFO]
09:09:11 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client-api ---
09:09:11 [INFO]
09:09:11 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client-api ---
09:09:11 [INFO]
09:09:11 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client-api ---
09:09:11 [INFO]
09:09:11 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client-api ---
09:09:11 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:11 [INFO]
09:09:11 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client-api ---
09:09:11 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:11 [INFO]
09:09:11 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client-api ---
09:09:11 [INFO]
09:09:11 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client-api ---
09:09:11 [INFO]
09:09:11 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client-api ---
09:09:11 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:09:11 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/resources
09:09:11 [INFO]
09:09:11 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client-api ---
09:09:12 [INFO] Changes detected - recompiling the module!
09:09:12 [INFO] Compiling 23 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/classes
09:09:14 [INFO]
09:09:14 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client-api ---
09:09:14 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:09:14 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/test/resources
09:09:14 [INFO]
09:09:14 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client-api ---
09:09:14 [INFO] No sources to compile
09:09:14 [INFO]
09:09:14 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client-api ---
09:09:14 [INFO] No tests to run.
09:09:14 [INFO]
09:09:14 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client-api ---
09:09:14 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:09:14 [INFO]
09:09:14 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client-api ---
09:09:14 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar
09:09:14 [INFO]
09:09:14 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client-api ---
09:09:14 [INFO] No previous run data found, generating javadoc.
09:09:16 [INFO]
09:09:16 Loading source files for package org.onap.sdc.api.consumer...
09:09:16 Loading source files for package org.onap.sdc.api...
09:09:16 Loading source files for package org.onap.sdc.api.notification...
09:09:16 Loading source files for package org.onap.sdc.api.results...
09:09:16 Constructing Javadoc information...
09:09:16 Standard Doclet version 11.0.16
09:09:16 Building tree for all the packages and classes...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/ArtifactInfo.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/DistributionClient.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/DownloadResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/IDistributionClient.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/StatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/DistributionStatusEnum.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/INotificationData.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IStatusData.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/DistributionActionResultEnum.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-summary.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-tree.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-summary.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-tree.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-summary.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-tree.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-summary.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-tree.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/constant-values.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/DistributionClient.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/ArtifactInfo.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/StatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/class-use/DownloadResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/class-use/DistributionStatusEnum.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/DistributionActionResultEnum.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/package-use.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/consumer/package-use.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/notification/package-use.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/org/onap/sdc/api/results/package-use.html...
09:09:16 Building index for all the packages and classes...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/overview-tree.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/index-all.html...
09:09:16 Building index for all classes...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses-index.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allpackages-index.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/deprecated-list.html...
09:09:16 Building index for all classes...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/allclasses.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/index.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/overview-summary.html...
09:09:16 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/apidocs/help-doc.html...
09:09:16 3 warnings
09:09:16 [WARNING] Javadoc Warnings
09:09:16 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources()
09:09:16 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources()
09:09:16 [WARNING] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/src/main/java/org/onap/sdc/api/consumer/IConfiguration.java:199: warning - Tag @link: reference not found: INotificationData#getResources()
09:09:16 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar
09:09:16 [INFO]
09:09:16 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client-api ---
09:09:16 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:16 [INFO]
09:09:16 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client-api ---
09:09:16 [INFO] No tests to run.
09:09:16 [INFO]
09:09:16 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client-api ---
09:09:16 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:09:16 [INFO]
09:09:16 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client-api ---
09:09:16 [INFO]
09:09:16 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client-api ---
09:09:16 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar
09:09:16 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT.pom
09:09:16 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client-api/2.2.0-SNAPSHOT/sdc-distribution-client-api-2.2.0-SNAPSHOT-javadoc.jar
09:09:16 [INFO]
09:09:16 [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >----
09:09:16 [INFO] Building sdc-distribution-client 2.2.0-SNAPSHOT [3/4]
09:09:16 [INFO] --------------------------------[ jar ]---------------------------------
09:09:21 [INFO]
09:09:21 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client ---
09:09:21 [INFO]
09:09:21 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client ---
09:09:21 [INFO]
09:09:21 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client ---
09:09:21 [INFO]
09:09:21 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client ---
09:09:21 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:21 [INFO]
09:09:21 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client ---
09:09:21 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:09:21 [INFO]
09:09:21 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client ---
09:09:21 [INFO]
09:09:21 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client ---
09:09:21 [INFO]
09:09:21 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client ---
09:09:21 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:09:21 [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/resources
09:09:21 [INFO]
09:09:21 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client ---
09:09:21 [INFO] Changes detected - recompiling the module!
09:09:21 [INFO] Compiling 44 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes
09:09:23 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/http/SdcConnectorClient.java: Some input files use or override a deprecated API.
09:09:23 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/http/SdcConnectorClient.java: Recompile with -Xlint:deprecation for details.
09:09:23 [INFO]
09:09:23 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client ---
09:09:23 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:09:23 [INFO] Copying 10 resources
09:09:23 [INFO]
09:09:23 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client ---
09:09:23 [INFO] Changes detected - recompiling the module!
09:09:23 [INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes
09:09:24 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API.
09:09:24 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details.
09:09:24 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations.
09:09:24 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details.
09:09:24 [INFO]
09:09:24 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client ---
09:09:24 [INFO]
09:09:24 [INFO] -------------------------------------------------------
09:09:24 [INFO]  T E S T S
09:09:24 [INFO] -------------------------------------------------------
09:09:25 [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest
09:09:27 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.24 s - in org.onap.sdc.http.HttpSdcClientResponseTest
09:09:27 [INFO] Running org.onap.sdc.http.HttpSdcClientTest
09:09:27 09:09:27.945 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
09:09:28 09:09:28.671 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target
09:09:28 09:09:28.673 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200
09:09:28 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.659 s - in org.onap.sdc.http.HttpSdcClientTest
09:09:28 [INFO] Running org.onap.sdc.http.HttpClientFactoryTest
09:09:29 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.386 s - in org.onap.sdc.http.HttpClientFactoryTest
09:09:29 [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest
09:09:29 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.026 s - in org.onap.sdc.http.HttpRequestFactoryTest
09:09:29 [INFO] Running org.onap.sdc.http.SdcConnectorClientTest
09:09:29 09:09:29.515 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= b034b5fc-aa00-4ebd-bb9b-90b1b8f9159b url= /sdc/v1/artifactTypes
09:09:29 09:09:29.520 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 36331282
09:09:29 09:09:29.528 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
09:09:29 09:09:29.529 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"]
09:09:29 09:09:29.530 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response
09:09:29 09:09:29.548 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= e2d7db0f-c575-4068-ac72-524d0d7e3966 url= /sdc/v1/artifactTypes
09:09:29 09:09:29.552 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error:
09:09:29 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
09:09:29 	at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312)
09:09:29 	at java.base/java.io.InputStream.read(InputStream.java:271)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
09:09:29 	at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181)
09:09:29 	at java.base/java.io.Reader.read(Reader.java:229)
09:09:29 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282)
09:09:29 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261)
09:09:29 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108)
09:09:29 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:922)
09:09:29 	at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681)
09:09:29 	at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79)
09:09:29 	at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
09:09:29 	at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source)
09:09:29 	at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239)
09:09:29 	at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253)
09:09:29 	at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142)
09:09:29 	at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45)
09:09:29 	at org.mockito.Answers.answer(Answers.java:99)
09:09:29 	at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110)
09:09:29 	at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
09:09:29 	at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74)
09:09:29 	at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216)
09:09:29 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:09:29 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:09:29 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:09:29 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:09:29 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:09:29 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:09:29 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:09:29 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:09:29 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:09:29 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:09:29 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:09:29 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:09:29 09:09:29.674 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response
09:09:29 09:09:29.679 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= a46bcb44-9dac-4e60-b2e3-ef81d12fe392 url= /sdc/v1/artifactTypes
09:09:29 09:09:29.680 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 753877793
09:09:29 09:09:29.680 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
09:09:29 09:09:29.681 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work
09:09:29 09:09:29.685 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= af904010-0cb1-429d-9e3d-113a22429728 url= /sdc/v1/distributionKafkaData
09:09:29 09:09:29.685 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1750724866
09:09:29 09:09:29.685 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem]
09:09:29 09:09:29.686 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work
09:09:29 09:09:29.694 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1787450613
09:09:29 09:09:29.694 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem]
09:09:29 09:09:29.695 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred:
09:09:29 java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only.
09:09:29 	at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312)
09:09:29 	at java.base/java.io.InputStream.read(InputStream.java:271)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
09:09:29 	at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
09:09:29 	at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181)
09:09:29 	at java.base/java.io.Reader.read(Reader.java:229)
09:09:29 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282)
09:09:29 	at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261)
09:09:29 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108)
09:09:29 	at org.apache.commons.io.IOUtils.copy(IOUtils.java:922)
09:09:29 	at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681)
09:09:29 	at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144)
09:09:29 	at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
09:09:29 	at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source)
09:09:29 	at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239)
09:09:29 	at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253)
09:09:29 	at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142)
09:09:29 	at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45)
09:09:29 	at org.mockito.Answers.answer(Answers.java:99)
09:09:29 	at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110)
09:09:29 	at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
09:09:29 	at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82)
09:09:29 	at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151)
09:09:29 	at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130)
09:09:29 	at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304)
09:09:29 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:09:29 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:09:29 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:09:29 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:09:29 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:09:29 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:09:29 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:09:29 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:09:29 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:09:29 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:29 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:29 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:29 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:29 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:09:29 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:09:29 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:09:29 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:09:29 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:09:29 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:09:29 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:09:29 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:09:29 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:09:29 09:09:29.721 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= cb20088f-29ca-4b6e-b39f-47ee1e3ccaa0 url= /sdc/v1/artifactTypes
09:09:29 09:09:29.729 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 83a1788c-7498-4a11-a855-c3aea3adda8b url= /sdc/v1/distributionKafkaData
09:09:29 [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.609 s - in org.onap.sdc.http.SdcConnectorClientTest
09:09:29 [INFO] Running org.onap.sdc.utils.SdcKafkaTest
09:09:29 09:09:29.765 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server
09:09:29 09:09:29.976 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:46481
09:09:29 09:09:29.977 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set
09:09:29 09:09:29.977 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set
09:09:29 09:09:29.977 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider
09:09:29 09:09:29.980 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server
09:09:30 09:09:29.997 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@78a0f944
09:09:30 09:09:30.003 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit12795613514470425753 snapDir:/tmp/kafka-unit12795613514470425753
09:09:30 09:09:30.003 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false
09:09:30 09:09:30.029 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer -
09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _
09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | |
09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __
09:09:30 09:09:30.030 [Thread-2] INFO
org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | | 09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_| 09:09:30 09:09:30.030 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-docker-8c-8g-8464 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Ubuntu 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server 
environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/r
epository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/or
g/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repos
itory/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar
:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar:
09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp
09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler=
09:09:30 09:09:30.033 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-192-generic
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=444MB
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=8042MB
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=504MB
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000
09:09:30 09:09:30.034 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024
09:09:30 09:09:30.037 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled
09:09:30 09:09:30.039 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000
09:09:30 09:09:30.039 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000
09:09:30 09:09:30.040 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400.
09:09:30 09:09:30.040 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400.
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5
09:09:30 09:09:30.042 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false
09:09:30 09:09:30.044 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600
09:09:30 09:09:30.045 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1
09:09:30 09:09:30.045 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit12795613514470425753/version-2 snapdir /tmp/kafka-unit12795613514470425753/version-2
09:09:30 09:09:30.094 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
09:09:30 09:09:30.110 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
09:09:30 09:09:30.139 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in.
09:09:30 09:09:30.141 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0.
09:09:30 09:09:30.144 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 09:09:30 09:09:30.159 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:46481 09:09:30 09:09:30.192 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 09:09:30 09:09:30.192 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 09:09:30 09:09:30.193 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 09:09:30 09:09:30.193 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 09:09:30 09:09:30.203 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 09:09:30 09:09:30.203 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit12795613514470425753/version-2/snapshot.0 09:09:30 09:09:30.208 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 15 ms, highest zxid is 0x0, digest is 1371985504 09:09:30 09:09:30.208 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit12795613514470425753/version-2/snapshot.0 09:09:30 09:09:30.209 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 1 ms 09:09:30 09:09:30.226 [ProcessThread(sid:0 cport:46481):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 09:09:30 09:09:30.227 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 09:09:30 09:09:30.244 [Thread-2] INFO 
org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 09:09:30 09:09:30.247 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 09:09:31 09:09:31.799 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: 09:09:31 advertised.listeners = SASL_PLAINTEXT://localhost:40117 09:09:31 alter.config.policy.class.name = null 09:09:31 alter.log.dirs.replication.quota.window.num = 11 09:09:31 alter.log.dirs.replication.quota.window.size.seconds = 1 09:09:31 authorizer.class.name = 09:09:31 auto.create.topics.enable = true 09:09:31 auto.leader.rebalance.enable = true 09:09:31 background.threads = 10 09:09:31 broker.heartbeat.interval.ms = 2000 09:09:31 broker.id = 1 09:09:31 broker.id.generation.enable = true 09:09:31 broker.rack = null 09:09:31 broker.session.timeout.ms = 9000 09:09:31 client.quota.callback.class = null 09:09:31 compression.type = producer 09:09:31 connection.failed.authentication.delay.ms = 100 09:09:31 connections.max.idle.ms = 600000 09:09:31 connections.max.reauth.ms = 0 09:09:31 control.plane.listener.name = null 09:09:31 controlled.shutdown.enable = true 09:09:31 controlled.shutdown.max.retries = 3 09:09:31 controlled.shutdown.retry.backoff.ms = 5000 09:09:31 controller.listener.names = null 09:09:31 controller.quorum.append.linger.ms = 25 09:09:31 controller.quorum.election.backoff.max.ms = 1000 09:09:31 controller.quorum.election.timeout.ms = 1000 09:09:31 controller.quorum.fetch.timeout.ms = 2000 09:09:31 controller.quorum.request.timeout.ms = 2000 09:09:31 controller.quorum.retry.backoff.ms = 20 09:09:31 controller.quorum.voters = [] 09:09:31 controller.quota.window.num = 11 09:09:31 controller.quota.window.size.seconds = 1 09:09:31 controller.socket.timeout.ms = 30000 09:09:31 create.topic.policy.class.name = null 09:09:31 default.replication.factor = 1 09:09:31 delegation.token.expiry.check.interval.ms = 3600000 09:09:31 
delegation.token.expiry.time.ms = 86400000 09:09:31 delegation.token.master.key = null 09:09:31 delegation.token.max.lifetime.ms = 604800000 09:09:31 delegation.token.secret.key = null 09:09:31 delete.records.purgatory.purge.interval.requests = 1 09:09:31 delete.topic.enable = true 09:09:31 early.start.listeners = null 09:09:31 fetch.max.bytes = 57671680 09:09:31 fetch.purgatory.purge.interval.requests = 1000 09:09:31 group.initial.rebalance.delay.ms = 3000 09:09:31 group.max.session.timeout.ms = 1800000 09:09:31 group.max.size = 2147483647 09:09:31 group.min.session.timeout.ms = 6000 09:09:31 initial.broker.registration.timeout.ms = 60000 09:09:31 inter.broker.listener.name = null 09:09:31 inter.broker.protocol.version = 3.3-IV3 09:09:31 kafka.metrics.polling.interval.secs = 10 09:09:31 kafka.metrics.reporters = [] 09:09:31 leader.imbalance.check.interval.seconds = 300 09:09:31 leader.imbalance.per.broker.percentage = 10 09:09:31 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL 09:09:31 listeners = SASL_PLAINTEXT://localhost:40117 09:09:31 log.cleaner.backoff.ms = 15000 09:09:31 log.cleaner.dedupe.buffer.size = 134217728 09:09:31 log.cleaner.delete.retention.ms = 86400000 09:09:31 log.cleaner.enable = true 09:09:31 log.cleaner.io.buffer.load.factor = 0.9 09:09:31 log.cleaner.io.buffer.size = 524288 09:09:31 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 09:09:31 log.cleaner.max.compaction.lag.ms = 9223372036854775807 09:09:31 log.cleaner.min.cleanable.ratio = 0.5 09:09:31 log.cleaner.min.compaction.lag.ms = 0 09:09:31 log.cleaner.threads = 1 09:09:31 log.cleanup.policy = [delete] 09:09:31 log.dir = /tmp/kafka-unit11182757027218931278 09:09:31 log.dirs = null 09:09:31 log.flush.interval.messages = 1 09:09:31 log.flush.interval.ms = null 09:09:31 log.flush.offset.checkpoint.interval.ms = 60000 09:09:31 log.flush.scheduler.interval.ms = 9223372036854775807 09:09:31 
log.flush.start.offset.checkpoint.interval.ms = 60000 09:09:31 log.index.interval.bytes = 4096 09:09:31 log.index.size.max.bytes = 10485760 09:09:31 log.message.downconversion.enable = true 09:09:31 log.message.format.version = 3.0-IV1 09:09:31 log.message.timestamp.difference.max.ms = 9223372036854775807 09:09:31 log.message.timestamp.type = CreateTime 09:09:31 log.preallocate = false 09:09:31 log.retention.bytes = -1 09:09:31 log.retention.check.interval.ms = 300000 09:09:31 log.retention.hours = 168 09:09:31 log.retention.minutes = null 09:09:31 log.retention.ms = null 09:09:31 log.roll.hours = 168 09:09:31 log.roll.jitter.hours = 0 09:09:31 log.roll.jitter.ms = null 09:09:31 log.roll.ms = null 09:09:31 log.segment.bytes = 1073741824 09:09:31 log.segment.delete.delay.ms = 60000 09:09:31 max.connection.creation.rate = 2147483647 09:09:31 max.connections = 2147483647 09:09:31 max.connections.per.ip = 2147483647 09:09:31 max.connections.per.ip.overrides = 09:09:31 max.incremental.fetch.session.cache.slots = 1000 09:09:31 message.max.bytes = 1048588 09:09:31 metadata.log.dir = null 09:09:31 metadata.log.max.record.bytes.between.snapshots = 20971520 09:09:31 metadata.log.segment.bytes = 1073741824 09:09:31 metadata.log.segment.min.bytes = 8388608 09:09:31 metadata.log.segment.ms = 604800000 09:09:31 metadata.max.idle.interval.ms = 500 09:09:31 metadata.max.retention.bytes = -1 09:09:31 metadata.max.retention.ms = 604800000 09:09:31 metric.reporters = [] 09:09:31 metrics.num.samples = 2 09:09:31 metrics.recording.level = INFO 09:09:31 metrics.sample.window.ms = 30000 09:09:31 min.insync.replicas = 1 09:09:31 node.id = 1 09:09:31 num.io.threads = 2 09:09:31 num.network.threads = 2 09:09:31 num.partitions = 1 09:09:31 num.recovery.threads.per.data.dir = 1 09:09:31 num.replica.alter.log.dirs.threads = null 09:09:31 num.replica.fetchers = 1 09:09:31 offset.metadata.max.bytes = 4096 09:09:31 offsets.commit.required.acks = -1 09:09:31 offsets.commit.timeout.ms = 5000 
09:09:31 offsets.load.buffer.size = 5242880 09:09:31 offsets.retention.check.interval.ms = 600000 09:09:31 offsets.retention.minutes = 10080 09:09:31 offsets.topic.compression.codec = 0 09:09:31 offsets.topic.num.partitions = 50 09:09:31 offsets.topic.replication.factor = 1 09:09:31 offsets.topic.segment.bytes = 104857600 09:09:31 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 09:09:31 password.encoder.iterations = 4096 09:09:31 password.encoder.key.length = 128 09:09:31 password.encoder.keyfactory.algorithm = null 09:09:31 password.encoder.old.secret = null 09:09:31 password.encoder.secret = null 09:09:31 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 09:09:31 process.roles = [] 09:09:31 producer.purgatory.purge.interval.requests = 1000 09:09:31 queued.max.request.bytes = -1 09:09:31 queued.max.requests = 500 09:09:31 quota.window.num = 11 09:09:31 quota.window.size.seconds = 1 09:09:31 remote.log.index.file.cache.total.size.bytes = 1073741824 09:09:31 remote.log.manager.task.interval.ms = 30000 09:09:31 remote.log.manager.task.retry.backoff.max.ms = 30000 09:09:31 remote.log.manager.task.retry.backoff.ms = 500 09:09:31 remote.log.manager.task.retry.jitter = 0.2 09:09:31 remote.log.manager.thread.pool.size = 10 09:09:31 remote.log.metadata.manager.class.name = null 09:09:31 remote.log.metadata.manager.class.path = null 09:09:31 remote.log.metadata.manager.impl.prefix = null 09:09:31 remote.log.metadata.manager.listener.name = null 09:09:31 remote.log.reader.max.pending.tasks = 100 09:09:31 remote.log.reader.threads = 10 09:09:31 remote.log.storage.manager.class.name = null 09:09:31 remote.log.storage.manager.class.path = null 09:09:31 remote.log.storage.manager.impl.prefix = null 09:09:31 remote.log.storage.system.enable = false 09:09:31 replica.fetch.backoff.ms = 1000 09:09:31 replica.fetch.max.bytes = 1048576 09:09:31 replica.fetch.min.bytes = 1 09:09:31 replica.fetch.response.max.bytes = 
10485760
09:09:31 	replica.fetch.wait.max.ms = 500
09:09:31 	replica.high.watermark.checkpoint.interval.ms = 5000
09:09:31 	replica.lag.time.max.ms = 30000
09:09:31 	replica.selector.class = null
09:09:31 	replica.socket.receive.buffer.bytes = 65536
09:09:31 	replica.socket.timeout.ms = 30000
09:09:31 	replication.quota.window.num = 11
09:09:31 	replication.quota.window.size.seconds = 1
09:09:31 	request.timeout.ms = 30000
09:09:31 	reserved.broker.max.id = 1000
09:09:31 	sasl.client.callback.handler.class = null
09:09:31 	sasl.enabled.mechanisms = [PLAIN]
09:09:31 	sasl.jaas.config = null
09:09:31 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:09:31 	sasl.kerberos.min.time.before.relogin = 60000
09:09:31 	sasl.kerberos.principal.to.local.rules = [DEFAULT]
09:09:31 	sasl.kerberos.service.name = null
09:09:31 	sasl.kerberos.ticket.renew.jitter = 0.05
09:09:31 	sasl.kerberos.ticket.renew.window.factor = 0.8
09:09:31 	sasl.login.callback.handler.class = null
09:09:31 	sasl.login.class = null
09:09:31 	sasl.login.connect.timeout.ms = null
09:09:31 	sasl.login.read.timeout.ms = null
09:09:31 	sasl.login.refresh.buffer.seconds = 300
09:09:31 	sasl.login.refresh.min.period.seconds = 60
09:09:31 	sasl.login.refresh.window.factor = 0.8
09:09:31 	sasl.login.refresh.window.jitter = 0.05
09:09:31 	sasl.login.retry.backoff.max.ms = 10000
09:09:31 	sasl.login.retry.backoff.ms = 100
09:09:31 	sasl.mechanism.controller.protocol = GSSAPI
09:09:31 	sasl.mechanism.inter.broker.protocol = PLAIN
09:09:31 	sasl.oauthbearer.clock.skew.seconds = 30
09:09:31 	sasl.oauthbearer.expected.audience = null
09:09:31 	sasl.oauthbearer.expected.issuer = null
09:09:31 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:09:31 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:09:31 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:09:31 	sasl.oauthbearer.jwks.endpoint.url = null
09:09:31 	sasl.oauthbearer.scope.claim.name = scope
09:09:31 	sasl.oauthbearer.sub.claim.name = sub
09:09:31 	sasl.oauthbearer.token.endpoint.url = null
09:09:31 	sasl.server.callback.handler.class = null
09:09:31 	sasl.server.max.receive.size = 524288
09:09:31 	security.inter.broker.protocol = SASL_PLAINTEXT
09:09:31 	security.providers = null
09:09:31 	socket.connection.setup.timeout.max.ms = 30000
09:09:31 	socket.connection.setup.timeout.ms = 10000
09:09:31 	socket.listen.backlog.size = 50
09:09:31 	socket.receive.buffer.bytes = 102400
09:09:31 	socket.request.max.bytes = 104857600
09:09:31 	socket.send.buffer.bytes = 102400
09:09:31 	ssl.cipher.suites = []
09:09:31 	ssl.client.auth = none
09:09:31 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:09:31 	ssl.endpoint.identification.algorithm = https
09:09:31 	ssl.engine.factory.class = null
09:09:31 	ssl.key.password = null
09:09:31 	ssl.keymanager.algorithm = SunX509
09:09:31 	ssl.keystore.certificate.chain = null
09:09:31 	ssl.keystore.key = null
09:09:31 	ssl.keystore.location = null
09:09:31 	ssl.keystore.password = null
09:09:31 	ssl.keystore.type = JKS
09:09:31 	ssl.principal.mapping.rules = DEFAULT
09:09:31 	ssl.protocol = TLSv1.3
09:09:31 	ssl.provider = null
09:09:31 	ssl.secure.random.implementation = null
09:09:31 	ssl.trustmanager.algorithm = PKIX
09:09:31 	ssl.truststore.certificates = null
09:09:31 	ssl.truststore.location = null
09:09:31 	ssl.truststore.password = null
09:09:31 	ssl.truststore.type = JKS
09:09:31 	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
09:09:31 	transaction.max.timeout.ms = 900000
09:09:31 	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
09:09:31 	transaction.state.log.load.buffer.size = 5242880
09:09:31 	transaction.state.log.min.isr = 1
09:09:31 	transaction.state.log.num.partitions = 4
09:09:31 	transaction.state.log.replication.factor = 1
09:09:31 	transaction.state.log.segment.bytes = 104857600
09:09:31 	transactional.id.expiration.ms = 604800000
09:09:31 	unclean.leader.election.enable = false
09:09:31 	zookeeper.clientCnxnSocket = null
09:09:31 	zookeeper.connect = 127.0.0.1:46481
09:09:31 	zookeeper.connection.timeout.ms = null
09:09:31 	zookeeper.max.in.flight.requests = 10
09:09:31 	zookeeper.session.timeout.ms = 30000
09:09:31 	zookeeper.set.acl = false
09:09:31 	zookeeper.ssl.cipher.suites = null
09:09:31 	zookeeper.ssl.client.enable = false
09:09:31 	zookeeper.ssl.crl.enable = false
09:09:31 	zookeeper.ssl.enabled.protocols = null
09:09:31 	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
09:09:31 	zookeeper.ssl.keystore.location = null
09:09:31 	zookeeper.ssl.keystore.password = null
09:09:31 	zookeeper.ssl.keystore.type = null
09:09:31 	zookeeper.ssl.ocsp.enable = false
09:09:31 	zookeeper.ssl.protocol = TLSv1.2
09:09:31 	zookeeper.ssl.truststore.location = null
09:09:31 	zookeeper.ssl.truststore.password = null
09:09:31 	zookeeper.ssl.truststore.type = null
09:09:31 
09:09:31 09:09:31.869 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
09:09:32 09:09:32.014 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
09:09:32 09:09:32.019 [main] INFO kafka.server.KafkaServer - starting
09:09:32 09:09:32.020 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:46481
09:09:32 09:09:32.020 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
09:09:32 09:09:32.043 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:46481.
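The config dump above shows the embedded test broker running with `security.inter.broker.protocol = SASL_PLAINTEXT` and `sasl.enabled.mechanisms = [PLAIN]`. A client talking to a broker configured this way needs matching SASL settings; a minimal client-properties sketch (the bootstrap address and credentials below are placeholders for illustration, not values taken from this build):

```properties
# Hypothetical client config matching the broker's SASL_PLAINTEXT / PLAIN setup
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client-user" \
  password="client-secret";
bootstrap.servers=127.0.0.1:9092
```

In the test itself the JAAS login configuration is supplied via `java.security.auth.login.config=src/test/resources/jaas.conf`, as the log lines below show, rather than inline `sasl.jaas.config`.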
09:09:32 09:09:32.050 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-docker-8c-8g-8464 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client-api/target/sdc-distribution-client-api-2.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/
2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-a
pi-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/reposi
tory/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropw
izard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client 
environment:os.name=Linux 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-192-generic 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=536MB 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=8042MB 09:09:32 09:09:32.051 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=640MB 09:09:32 09:09:32.055 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:46481 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@2e9290bc 09:09:32 09:09:32.060 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 09:09:32 09:09:32.071 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 09:09:32 09:09:32.073 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 09:09:32 09:09:32.074 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 
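The broker logs in to ZooKeeper through the `Client` JAAS section (the earlier `JaasUtils` lines reference `src/test/resources/jaas.conf` with `zookeeper.sasl.clientconfig=Client`), and the handshake below negotiates DIGEST-MD5 for user `zooclient`. A `jaas.conf` fragment consistent with those log lines would look roughly like this (the password is a placeholder; the actual file lives in the repo at `src/test/resources/jaas.conf`):

```
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="zooclient"
    password="<placeholder>";
};
```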
09:09:32 09:09:32.087 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 09:09:32 09:09:32.089 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 09:09:32 09:09:32.090 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.Login - Client successfully logged in. 09:09:32 09:09:32.092 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 09:09:32 09:09:32.131 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:46481. 09:09:32 09:09:32.132 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 09:09:32 09:09:32.136 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:45792, server: localhost/127.0.0.1:46481 09:09:32 09:09:32.136 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:46481] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:45792 09:09:32 09:09:32.139 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:46481 09:09:32 09:09:32.160 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:45792 client's lastZxid is 0x0 09:09:32 09:09:32.164 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000002945e0000 09:09:32 09:09:32.165 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000002945e0000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:45792 
09:09:32 09:09:32.171 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 09:09:32 09:09:32.172 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 09:09:32 09:09:32.268 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 09:09:32 09:09:32.273 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and actual tree: 1371985504 09:09:32 09:09:32.277 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 09:09:32 09:09:32.283 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000002945e0000 with negotiated timeout 30000 for client /127.0.0.1:45792 09:09:32 09:09:32.286 [main-SendThread(127.0.0.1:46481)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:46481, session id = 0x1000002945e0000, negotiated timeout = 30000 09:09:32 09:09:32.289 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 09:09:32 09:09:32.293 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 09:09:32 09:09:32.295 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 09:09:32 09:09:32.295 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 09:09:32 09:09:32.295 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 
09:09:32 09:09:32.296 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 09:09:32 09:09:32.300 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 09:09:32 09:09:32.304 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 09:09:32 09:09:32.305 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 09:09:32 09:09:32.305 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 09:09:32 09:09:32.306 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 09:09:32 09:09:32.306 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 09:09:32 09:09:32.349 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 09:09:32 09:09:32.351 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 09:09:32 09:09:32.351 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 09:09:32 09:09:32.350 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
09:09:32 09:09:32.354 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 09:09:32 09:09:32.355 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 09:09:32 09:09:32.355 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 09:09:32 09:09:32.356 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 09:09:32 09:09:32.358 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.358 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.361 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:32 09:09:32.361 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:32 ] 09:09:32 09:09:32.361 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:32 , 'ip,'127.0.0.1 09:09:32 ] 09:09:32 09:09:32.369 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 09:09:32 09:09:32.369 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 09:09:32 09:09:32.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 09:09:32 09:09:32.400 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 09:09:32 09:09:32.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 3874180304 09:09:32 09:09:32.403 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 09:09:32 09:09:32.405 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 09:09:32 09:09:32.437 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.437 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 09:09:32 09:09:32.442 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 09:09:32 09:09:32.444 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 
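The create-then-retry pattern visible here is worth calling out: the create of `/brokers/ids` fails with error `-101` (ZooKeeper's NONODE, meaning the parent znode is missing), after which `/brokers` is created and `/brokers/ids` is retried successfully. That is the standard recursive-create idiom. A toy sketch of the logic against an in-memory store (not the real ZooKeeper API):

```python
class NoNodeError(Exception):
    """Stand-in for ZooKeeper error -101: the node's parent does not exist."""

class ToyStore:
    """Minimal stand-in for a znode tree: create() fails if the parent is missing."""
    def __init__(self):
        self.nodes = {"/"}

    def create(self, path):
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:
            raise NoNodeError(parent)
        self.nodes.add(path)

def create_recursive(store, path):
    """Create path, creating missing ancestors first, as in the log for /brokers/ids."""
    try:
        store.create(path)
    except NoNodeError:
        parent = path.rsplit("/", 1)[0]
        create_recursive(store, parent)
        store.create(path)  # retry once the parent exists

store = ToyStore()
create_recursive(store, "/brokers/ids")
print(sorted(store.nodes))  # → ['/', '/brokers', '/brokers/ids']
```

The same pattern repeats below for `/config/changes` and `/admin/delete_topics`.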
09:09:32 09:09:32.446 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.447 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.447 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:32 09:09:32.447 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:32 ] 09:09:32 09:09:32.447 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:32 , 'ip,'127.0.0.1 09:09:32 ] 09:09:32 09:09:32.448 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 3874180304 09:09:32 09:09:32.449 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 3992141530 09:09:32 09:09:32.450 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 09:09:32 09:09:32.451 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:32 09:09:32.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 8185728049 09:09:32 09:09:32.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 09:09:32 09:09:32.453 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: 
'/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 09:09:32 09:09:32.455 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.455 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.455 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:32 09:09:32.456 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:32 ] 09:09:32 09:09:32.456 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:32 , 'ip,'127.0.0.1 09:09:32 ] 09:09:32 09:09:32.456 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8185728049 09:09:32 09:09:32.456 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 6753288871 09:09:32 09:09:32.457 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 09:09:32 09:09:32.458 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:32 09:09:32.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 8473520008 09:09:32 09:09:32.459 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 09:09:32 09:09:32.460 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids 
finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 09:09:32 09:09:32.463 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.464 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.465 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:32 09:09:32.465 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:32 ] 09:09:32 09:09:32.466 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:32 , 'ip,'127.0.0.1 09:09:32 ] 09:09:32 09:09:32.466 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 8473520008 09:09:32 09:09:32.467 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 6840153118 09:09:32 09:09:32.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 09:09:32 09:09:32.468 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:32 09:09:32.469 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 9481179820 09:09:32 09:09:32.469 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 09:09:32 09:09:32.470 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, 
packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 09:09:32 09:09:32.472 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.472 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.491 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 09:09:32 09:09:32.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 09:09:32 09:09:32.493 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 09:09:32 09:09:32.496 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:32 09:09:32.496 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:32 09:09:32 09:09:32.496 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:32 09:09:32.496 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:32 ] 09:09:32 09:09:32.497 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:32 , 'ip,'127.0.0.1 09:09:32 ] 09:09:32 09:09:32.497 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 9481179820
09:09:32 09:09:32.497 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 11014254696
09:09:32 09:09:32.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a
09:09:32 09:09:32.498 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 13665429434
09:09:32 09:09:32.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a
09:09:32 09:09:32.500 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config
09:09:32 09:09:32.501 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 13665429434
09:09:32 09:09:32.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 14292331056
09:09:32 09:09:32.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a
09:09:32 09:09:32.504 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 14416726297
09:09:32 09:09:32.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a
09:09:32 09:09:32.504 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes
09:09:32 09:09:32.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.508 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a
09:09:32 09:09:32.508 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101
09:09:32 09:09:32.508 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response::
09:09:32 09:09:32.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.511 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14416726297
09:09:32 09:09:32.511 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 13828354529
09:09:32 09:09:32.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a
09:09:32 09:09:32.512 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin
09:09:32 09:09:32.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 16628144506
09:09:32 09:09:32.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a
09:09:32 09:09:32.512 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin
09:09:32 09:09:32.514 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.514 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.514 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.514 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.514 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.515 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 16628144506
09:09:32 09:09:32.515 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 15158331812
09:09:32 09:09:32.516 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a
09:09:32 09:09:32.517 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin
09:09:32 09:09:32.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 15916342506
09:09:32 09:09:32.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a
09:09:32 09:09:32.517 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics
09:09:32 09:09:32.519 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.519 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.520 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.520 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.520 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.520 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 15916342506
09:09:32 09:09:32.520 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 15198152831
09:09:32 09:09:32.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a
09:09:32 09:09:32.521 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:32 09:09:32.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 18198449913
09:09:32 09:09:32.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a
09:09:32 09:09:32.522 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid
09:09:32 09:09:32.523 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.523 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.523 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.524 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.524 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.524 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18198449913
09:09:32 09:09:32.524 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 18985764638
09:09:32 09:09:32.526 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a
09:09:32 09:09:32.526 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification
09:09:32 09:09:32.527 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 19607496074
09:09:32 09:09:32.527 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a
09:09:32 09:09:32.527 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 19607496074
09:09:32 09:09:32.529 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 18987372121
09:09:32 09:09:32.534 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a
09:09:32 09:09:32.535 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block
09:09:32 09:09:32.535 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 22723350762
09:09:32 09:09:32.535 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a
09:09:32 09:09:32.536 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22723350762
09:09:32 09:09:32.538 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 21900890723
09:09:32 09:09:32.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a
09:09:32 09:09:32.540 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification
09:09:32 09:09:32.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 25947530525
09:09:32 09:09:32.540 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a
09:09:32 09:09:32.541 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification
09:09:32 09:09:32.543 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.543 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.543 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.543 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.544 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.544 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 25947530525
09:09:32 09:09:32.544 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 25894080538
09:09:32 09:09:32.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a
09:09:32 09:09:32.575 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 27797572323
09:09:32 09:09:32.575 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a
09:09:32 09:09:32.576 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics
09:09:32 09:09:32.579 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.579 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.579 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.579 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.580 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.580 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27797572323
09:09:32 09:09:32.580 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 25776973886
09:09:32 09:09:32.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a
09:09:32 09:09:32.581 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 27306039796
09:09:32 09:09:32.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a
09:09:32 09:09:32.583 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients
09:09:32 09:09:32.584 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27306039796
09:09:32 09:09:32.585 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 29227232059
09:09:32 09:09:32.586 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a
09:09:32 09:09:32.587 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 30095873607
09:09:32 09:09:32.587 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a
09:09:32 09:09:32.587 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users
09:09:32 09:09:32.588 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 30095873607
09:09:32 09:09:32.589 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 28378080797
09:09:32 09:09:32.590 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a
09:09:32 09:09:32.591 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 29476376186
09:09:32 09:09:32.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a
09:09:32 09:09:32.591 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 29476376186
09:09:32 09:09:32.593 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 29928719401
09:09:32 09:09:32.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a
09:09:32 09:09:32.594 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:32 09:09:32.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 31252681370
09:09:32 09:09:32.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a
09:09:32 09:09:32.595 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips
09:09:32 09:09:32.612 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.612 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id
09:09:32 09:09:32.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id
09:09:32 09:09:32.615 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response::
09:09:32 09:09:32.952 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.953 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a
09:09:32 09:09:32.956 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101
09:09:32 09:09:32.957 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a226c63704f7959312d5159324d4d546867484747675341227d,v{s{31,s{'world,'anyone}}},0 response::
09:09:32 09:09:32.959 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 31252681370
09:09:32 09:09:32.960 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30929825204
09:09:32 09:09:32.961 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a
09:09:32 09:09:32.962 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster
09:09:32 09:09:32.962 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 31864508666
09:09:32 09:09:32.962 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a
09:09:32 09:09:32.963 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster
09:09:32 09:09:32.964 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:32 09:09:32.964 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:32
09:09:32 09:09:32.965 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:32 09:09:32.965 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:32 ]
09:09:32 09:09:32.965 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:32 , 'ip,'127.0.0.1
09:09:32 ]
09:09:32 09:09:32.965 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 31864508666
09:09:32 09:09:32.965 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 30975192086
09:09:33 09:09:33.061 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a
09:09:33 09:09:33.062 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster
09:09:33 09:09:33.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 33170292673
09:09:33 09:09:33.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a
09:09:33 09:09:33.064 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a226c63704f7959312d5159324d4d546867484747675341227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id
09:09:33 09:09:33.067 [main] INFO kafka.server.KafkaServer - Cluster ID = lcpOyY1-QY2MMThgHGGgSA
09:09:33 09:09:33.073 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit11182757027218931278/meta.properties
09:09:33 09:09:33.082 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:33 09:09:33.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/
09:09:33 09:09:33.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/
09:09:33 09:09:33.084 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response::
09:09:33 09:09:33.131 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:33 09:09:33.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1
09:09:33 09:09:33.131 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1
09:09:33 09:09:33.132 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response::
09:09:33 09:09:33.134 [main] INFO kafka.server.KafkaConfig - KafkaConfig values:
09:09:33 	advertised.listeners = SASL_PLAINTEXT://localhost:40117
09:09:33 	alter.config.policy.class.name = null
09:09:33 	alter.log.dirs.replication.quota.window.num = 11
09:09:33 	alter.log.dirs.replication.quota.window.size.seconds = 1
09:09:33 	authorizer.class.name =
09:09:33 	auto.create.topics.enable = true
09:09:33 	auto.leader.rebalance.enable = true
09:09:33 	background.threads = 10
09:09:33 	broker.heartbeat.interval.ms = 2000
09:09:33 	broker.id = 1
09:09:33 	broker.id.generation.enable = true
09:09:33 	broker.rack = null
09:09:33 	broker.session.timeout.ms = 9000
09:09:33 	client.quota.callback.class = null
09:09:33 	compression.type = producer
09:09:33 	connection.failed.authentication.delay.ms = 100
09:09:33 	connections.max.idle.ms = 600000
09:09:33 	connections.max.reauth.ms = 0
09:09:33 	control.plane.listener.name = null
09:09:33 	controlled.shutdown.enable = true
09:09:33 	controlled.shutdown.max.retries = 3
09:09:33 	controlled.shutdown.retry.backoff.ms = 5000
09:09:33 	controller.listener.names = null
09:09:33 	controller.quorum.append.linger.ms = 25
09:09:33 	controller.quorum.election.backoff.max.ms = 1000
09:09:33 	controller.quorum.election.timeout.ms = 1000
09:09:33 	controller.quorum.fetch.timeout.ms = 2000
09:09:33 	controller.quorum.request.timeout.ms = 2000
09:09:33 	controller.quorum.retry.backoff.ms = 20
09:09:33 	controller.quorum.voters = []
09:09:33 	controller.quota.window.num = 11
09:09:33 	controller.quota.window.size.seconds = 1
09:09:33 	controller.socket.timeout.ms = 30000
09:09:33 	create.topic.policy.class.name = null
09:09:33 	default.replication.factor = 1
09:09:33 	delegation.token.expiry.check.interval.ms = 3600000
09:09:33 	delegation.token.expiry.time.ms = 86400000
09:09:33 	delegation.token.master.key = null
09:09:33 	delegation.token.max.lifetime.ms = 604800000
09:09:33 	delegation.token.secret.key = null
09:09:33 	delete.records.purgatory.purge.interval.requests = 1
09:09:33 	delete.topic.enable = true
09:09:33 	early.start.listeners = null
09:09:33 	fetch.max.bytes = 57671680
09:09:33 	fetch.purgatory.purge.interval.requests = 1000
09:09:33 	group.initial.rebalance.delay.ms = 3000
09:09:33 	group.max.session.timeout.ms = 1800000
09:09:33 	group.max.size = 2147483647
09:09:33 	group.min.session.timeout.ms = 6000
09:09:33 	initial.broker.registration.timeout.ms = 60000
09:09:33 	inter.broker.listener.name = null
09:09:33 	inter.broker.protocol.version = 3.3-IV3
09:09:33 	kafka.metrics.polling.interval.secs = 10
09:09:33 	kafka.metrics.reporters = []
09:09:33 	leader.imbalance.check.interval.seconds = 300
09:09:33 	leader.imbalance.per.broker.percentage = 10
09:09:33 	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
09:09:33 	listeners = SASL_PLAINTEXT://localhost:40117
09:09:33 	log.cleaner.backoff.ms = 15000
09:09:33 	log.cleaner.dedupe.buffer.size = 134217728
09:09:33 	log.cleaner.delete.retention.ms = 86400000
09:09:33 	log.cleaner.enable = true
09:09:33 	log.cleaner.io.buffer.load.factor = 0.9
09:09:33 	log.cleaner.io.buffer.size = 524288
09:09:33 	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
09:09:33 	log.cleaner.max.compaction.lag.ms = 9223372036854775807
09:09:33 	log.cleaner.min.cleanable.ratio = 0.5
09:09:33 	log.cleaner.min.compaction.lag.ms = 0
09:09:33 	log.cleaner.threads = 1
09:09:33 	log.cleanup.policy = [delete]
09:09:33 	log.dir = /tmp/kafka-unit11182757027218931278
09:09:33 	log.dirs = null
09:09:33 	log.flush.interval.messages = 1
09:09:33 	log.flush.interval.ms = null
09:09:33 	log.flush.offset.checkpoint.interval.ms = 60000
09:09:33 	log.flush.scheduler.interval.ms = 9223372036854775807
09:09:33 	log.flush.start.offset.checkpoint.interval.ms = 60000
09:09:33 	log.index.interval.bytes = 4096
09:09:33 	log.index.size.max.bytes = 10485760
09:09:33 	log.message.downconversion.enable = true
09:09:33 	log.message.format.version = 3.0-IV1
09:09:33 	log.message.timestamp.difference.max.ms = 9223372036854775807
09:09:33 	log.message.timestamp.type = CreateTime
09:09:33 	log.preallocate = false
09:09:33 	log.retention.bytes = -1
09:09:33 	log.retention.check.interval.ms = 300000
09:09:33 	log.retention.hours = 168
09:09:33 	log.retention.minutes = null
09:09:33 	log.retention.ms = null
09:09:33 	log.roll.hours = 168
09:09:33 	log.roll.jitter.hours = 0
09:09:33 	log.roll.jitter.ms = null
09:09:33 	log.roll.ms = null
09:09:33 	log.segment.bytes = 1073741824
09:09:33
log.segment.delete.delay.ms = 60000 09:09:33 max.connection.creation.rate = 2147483647 09:09:33 max.connections = 2147483647 09:09:33 max.connections.per.ip = 2147483647 09:09:33 max.connections.per.ip.overrides = 09:09:33 max.incremental.fetch.session.cache.slots = 1000 09:09:33 message.max.bytes = 1048588 09:09:33 metadata.log.dir = null 09:09:33 metadata.log.max.record.bytes.between.snapshots = 20971520 09:09:33 metadata.log.segment.bytes = 1073741824 09:09:33 metadata.log.segment.min.bytes = 8388608 09:09:33 metadata.log.segment.ms = 604800000 09:09:33 metadata.max.idle.interval.ms = 500 09:09:33 metadata.max.retention.bytes = -1 09:09:33 metadata.max.retention.ms = 604800000 09:09:33 metric.reporters = [] 09:09:33 metrics.num.samples = 2 09:09:33 metrics.recording.level = INFO 09:09:33 metrics.sample.window.ms = 30000 09:09:33 min.insync.replicas = 1 09:09:33 node.id = 1 09:09:33 num.io.threads = 2 09:09:33 num.network.threads = 2 09:09:33 num.partitions = 1 09:09:33 num.recovery.threads.per.data.dir = 1 09:09:33 num.replica.alter.log.dirs.threads = null 09:09:33 num.replica.fetchers = 1 09:09:33 offset.metadata.max.bytes = 4096 09:09:33 offsets.commit.required.acks = -1 09:09:33 offsets.commit.timeout.ms = 5000 09:09:33 offsets.load.buffer.size = 5242880 09:09:33 offsets.retention.check.interval.ms = 600000 09:09:33 offsets.retention.minutes = 10080 09:09:33 offsets.topic.compression.codec = 0 09:09:33 offsets.topic.num.partitions = 50 09:09:33 offsets.topic.replication.factor = 1 09:09:33 offsets.topic.segment.bytes = 104857600 09:09:33 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 09:09:33 password.encoder.iterations = 4096 09:09:33 password.encoder.key.length = 128 09:09:33 password.encoder.keyfactory.algorithm = null 09:09:33 password.encoder.old.secret = null 09:09:33 password.encoder.secret = null 09:09:33 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 09:09:33 process.roles = [] 
09:09:33 producer.purgatory.purge.interval.requests = 1000 09:09:33 queued.max.request.bytes = -1 09:09:33 queued.max.requests = 500 09:09:33 quota.window.num = 11 09:09:33 quota.window.size.seconds = 1 09:09:33 remote.log.index.file.cache.total.size.bytes = 1073741824 09:09:33 remote.log.manager.task.interval.ms = 30000 09:09:33 remote.log.manager.task.retry.backoff.max.ms = 30000 09:09:33 remote.log.manager.task.retry.backoff.ms = 500 09:09:33 remote.log.manager.task.retry.jitter = 0.2 09:09:33 remote.log.manager.thread.pool.size = 10 09:09:33 remote.log.metadata.manager.class.name = null 09:09:33 remote.log.metadata.manager.class.path = null 09:09:33 remote.log.metadata.manager.impl.prefix = null 09:09:33 remote.log.metadata.manager.listener.name = null 09:09:33 remote.log.reader.max.pending.tasks = 100 09:09:33 remote.log.reader.threads = 10 09:09:33 remote.log.storage.manager.class.name = null 09:09:33 remote.log.storage.manager.class.path = null 09:09:33 remote.log.storage.manager.impl.prefix = null 09:09:33 remote.log.storage.system.enable = false 09:09:33 replica.fetch.backoff.ms = 1000 09:09:33 replica.fetch.max.bytes = 1048576 09:09:33 replica.fetch.min.bytes = 1 09:09:33 replica.fetch.response.max.bytes = 10485760 09:09:33 replica.fetch.wait.max.ms = 500 09:09:33 replica.high.watermark.checkpoint.interval.ms = 5000 09:09:33 replica.lag.time.max.ms = 30000 09:09:33 replica.selector.class = null 09:09:33 replica.socket.receive.buffer.bytes = 65536 09:09:33 replica.socket.timeout.ms = 30000 09:09:33 replication.quota.window.num = 11 09:09:33 replication.quota.window.size.seconds = 1 09:09:33 request.timeout.ms = 30000 09:09:33 reserved.broker.max.id = 1000 09:09:33 sasl.client.callback.handler.class = null 09:09:33 sasl.enabled.mechanisms = [PLAIN] 09:09:33 sasl.jaas.config = null 09:09:33 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:09:33 sasl.kerberos.min.time.before.relogin = 60000 09:09:33 sasl.kerberos.principal.to.local.rules = [DEFAULT] 09:09:33 
sasl.kerberos.service.name = null 09:09:33 sasl.kerberos.ticket.renew.jitter = 0.05 09:09:33 sasl.kerberos.ticket.renew.window.factor = 0.8 09:09:33 sasl.login.callback.handler.class = null 09:09:33 sasl.login.class = null 09:09:33 sasl.login.connect.timeout.ms = null 09:09:33 sasl.login.read.timeout.ms = null 09:09:33 sasl.login.refresh.buffer.seconds = 300 09:09:33 sasl.login.refresh.min.period.seconds = 60 09:09:33 sasl.login.refresh.window.factor = 0.8 09:09:33 sasl.login.refresh.window.jitter = 0.05 09:09:33 sasl.login.retry.backoff.max.ms = 10000 09:09:33 sasl.login.retry.backoff.ms = 100 09:09:33 sasl.mechanism.controller.protocol = GSSAPI 09:09:33 sasl.mechanism.inter.broker.protocol = PLAIN 09:09:33 sasl.oauthbearer.clock.skew.seconds = 30 09:09:33 sasl.oauthbearer.expected.audience = null 09:09:33 sasl.oauthbearer.expected.issuer = null 09:09:33 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:09:33 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:09:33 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:09:33 sasl.oauthbearer.jwks.endpoint.url = null 09:09:33 sasl.oauthbearer.scope.claim.name = scope 09:09:33 sasl.oauthbearer.sub.claim.name = sub 09:09:33 sasl.oauthbearer.token.endpoint.url = null 09:09:33 sasl.server.callback.handler.class = null 09:09:33 sasl.server.max.receive.size = 524288 09:09:33 security.inter.broker.protocol = SASL_PLAINTEXT 09:09:33 security.providers = null 09:09:33 socket.connection.setup.timeout.max.ms = 30000 09:09:33 socket.connection.setup.timeout.ms = 10000 09:09:33 socket.listen.backlog.size = 50 09:09:33 socket.receive.buffer.bytes = 102400 09:09:33 socket.request.max.bytes = 104857600 09:09:33 socket.send.buffer.bytes = 102400 09:09:33 ssl.cipher.suites = [] 09:09:33 ssl.client.auth = none 09:09:33 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:09:33 ssl.endpoint.identification.algorithm = https 09:09:33 ssl.engine.factory.class = null 09:09:33 ssl.key.password = null 09:09:33 
ssl.keymanager.algorithm = SunX509 09:09:33 ssl.keystore.certificate.chain = null 09:09:33 ssl.keystore.key = null 09:09:33 ssl.keystore.location = null 09:09:33 ssl.keystore.password = null 09:09:33 ssl.keystore.type = JKS 09:09:33 ssl.principal.mapping.rules = DEFAULT 09:09:33 ssl.protocol = TLSv1.3 09:09:33 ssl.provider = null 09:09:33 ssl.secure.random.implementation = null 09:09:33 ssl.trustmanager.algorithm = PKIX 09:09:33 ssl.truststore.certificates = null 09:09:33 ssl.truststore.location = null 09:09:33 ssl.truststore.password = null 09:09:33 ssl.truststore.type = JKS 09:09:33 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 09:09:33 transaction.max.timeout.ms = 900000 09:09:33 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 09:09:33 transaction.state.log.load.buffer.size = 5242880 09:09:33 transaction.state.log.min.isr = 1 09:09:33 transaction.state.log.num.partitions = 4 09:09:33 transaction.state.log.replication.factor = 1 09:09:33 transaction.state.log.segment.bytes = 104857600 09:09:33 transactional.id.expiration.ms = 604800000 09:09:33 unclean.leader.election.enable = false 09:09:33 zookeeper.clientCnxnSocket = null 09:09:33 zookeeper.connect = 127.0.0.1:46481 09:09:33 zookeeper.connection.timeout.ms = null 09:09:33 zookeeper.max.in.flight.requests = 10 09:09:33 zookeeper.session.timeout.ms = 30000 09:09:33 zookeeper.set.acl = false 09:09:33 zookeeper.ssl.cipher.suites = null 09:09:33 zookeeper.ssl.client.enable = false 09:09:33 zookeeper.ssl.crl.enable = false 09:09:33 zookeeper.ssl.enabled.protocols = null 09:09:33 zookeeper.ssl.endpoint.identification.algorithm = HTTPS 09:09:33 zookeeper.ssl.keystore.location = null 09:09:33 zookeeper.ssl.keystore.password = null 09:09:33 zookeeper.ssl.keystore.type = null 09:09:33 zookeeper.ssl.ocsp.enable = false 09:09:33 zookeeper.ssl.protocol = TLSv1.2 09:09:33 zookeeper.ssl.truststore.location = null 09:09:33 zookeeper.ssl.truststore.password = null 09:09:33 
zookeeper.ssl.truststore.type = null 09:09:33 09:09:33 09:09:33.139 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 09:09:33 09:09:33.209 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 09:09:33 09:09:33.209 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 09:09:33 09:09:33.210 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 09:09:33 09:09:33.212 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 09:09:33 09:09:33.252 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:33 09:09:33.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 09:09:33 09:09:33.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 09:09:33 09:09:33.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:33 09:09:33.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:33 ] 09:09:33 09:09:33.253 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:33 , 'ip,'127.0.0.1 09:09:33 ] 09:09:33 09:09:33.254 [main-SendThread(127.0.0.1:46481)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1770973772463,1770973772463,0,0,0,0,0,0,6} 09:09:33 09:09:33.257 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit11182757027218931278) 09:09:33 09:09:33.260 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit11182757027218931278 since no clean shutdown file was found 09:09:33 09:09:33.263 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 09:09:33 09:09:33.268 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 09:09:33 09:09:33.272 [main] INFO kafka.log.LogManager - Loaded 0 logs in 14ms. 09:09:33 09:09:33.272 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 09:09:33 09:09:33.273 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 09:09:33 09:09:33.274 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 09:09:33 09:09:33.274 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 09:09:33 09:09:33.275 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 09:09:33 09:09:33.276 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 09:09:33 09:09:33.276 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 
09:09:33 09:09:33.290 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 09:09:33 09:09:33.339 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 09:09:33 09:09:33.361 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 09:09:33 09:09:33.366 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:33 09:09:33.366 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 09:09:33 09:09:33.366 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 09:09:33 09:09:33.368 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 09:09:33 09:09:33.372 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 09:09:33 09:09:33.374 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:33 09:09:33.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 09:09:33 09:09:33.374 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 09:09:33 09:09:33.375 
[main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 09:09:33 09:09:33.376 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 09:09:33 09:09:33.398 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 09:09:33 09:09:33.437 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 09:09:33 09:09:33.438 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.439 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:33 09:09:33.554 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.554 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:33 09:09:33.655 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] 
DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.655 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:33 09:09:33.756 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.756 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:33 09:09:33.857 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.857 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:33 09:09:33.958 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:33 09:09:33.958 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG 
kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.024 [main] INFO kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 09:09:34 09:09:34.029 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:40117. 09:09:34 09:09:34.059 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.059 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.068 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 09:09:34 09:09:34.077 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 09:09:34 09:09:34.078 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.078 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata 
cache, retrying after backoff 09:09:34 09:09:34.111 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 09:09:34 09:09:34.114 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 09:09:34 09:09:34.117 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 09:09:34 09:09:34.118 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 09:09:34 09:09:34.135 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 09:09:34 09:09:34.136 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 09:09:34 09:09:34.139 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 09:09:34 09:09:34.140 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 09:09:34 09:09:34.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 09:09:34 09:09:34.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.140 
[SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.141 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1770973772455,1770973772455,0,0,0,0,0,0,5} 09:09:34 09:09:34.160 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.160 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.175 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? 
false) 09:09:34 09:09:34.179 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.179 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.191 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.191 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:34 09:09:34 09:09:34.191 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:34 09:09:34.191 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.191 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.192 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 33170292673 09:09:34 09:09:34.192 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 33259369720 09:09:34 09:09:34.193 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.193 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 09:09:34 
09:09:34 09:09:34.193 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.193 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.194 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 35749280264
09:09:34 09:09:34.195 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 36802961103
09:09:34 09:09:34.200 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a
09:09:34 09:09:34.201 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:34 09:09:34.201 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:34 09:09:34.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 36802961103
09:09:34 09:09:34.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a
09:09:34 09:09:34.202 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@16413850 response:: org.apache.zookeeper.MultiResponse@1dbbce85
09:09:34 09:09:34.208 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1770973774190,1770973774190,1,0,0,72057605117050880,209,0,25
09:09:34 
09:09:34 09:09:34.209 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:40117, czxid (broker epoch): 25
09:09:34 09:09:34.261 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.261 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.281 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.281 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.362 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.362 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.382 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.382 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.387 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting
09:09:34 09:09:34.399 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting
09:09:34 09:09:34.407 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting
09:09:34 09:09:34.408 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting
09:09:34 09:09:34.410 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller
09:09:34 09:09:34.411 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller
09:09:34 09:09:34.411 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response::
09:09:34 09:09:34.413 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller
09:09:34 09:09:34.413 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller
09:09:34 09:09:34.414 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response::
09:09:34 09:09:34.416 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch
09:09:34 09:09:34.416 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch
09:09:34 09:09:34.416 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response::
09:09:34 09:09:34.418 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.418 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:34 
09:09:34 09:09:34.419 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:34 09:09:34.419 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.419 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.419 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 36802961103
09:09:34 09:09:34.419 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 40551418404
09:09:34 09:09:34.436 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up.
09:09:34 09:09:34.451 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.463 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.463 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.482 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.482 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a
09:09:34 09:09:34.485 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch
09:09:34 09:09:34.485 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 44723862549
09:09:34 09:09:34.485 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a
09:09:34 09:09:34.485 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:34 09:09:34.485 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x26 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:34 09:09:34.486 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch
09:09:34 09:09:34.486 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 38,4 replyHeader:: 38,26,-101 request:: '/brokers/topics/__consumer_offsets,F response::
09:09:34 09:09:34.487 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0
09:09:34 09:09:34.488 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0
09:09:34 09:09:34.488 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
09:09:34 09:09:34.489 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms.
09:09:34 09:09:34.490 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete.
09:09:34 09:09:34.493 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.493 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:34 
09:09:34 09:09:34.493 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:34 09:09:34.493 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.493 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 44723862549
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44508496710
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.494 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44651828189
09:09:34 09:09:34.495 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 41201361577
09:09:34 09:09:34.496 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x27 zxid:0x1b txntype:14 reqpath:n/a
09:09:34 09:09:34.496 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller
09:09:34 09:09:34.498 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch
09:09:34 09:09:34.498 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 41201361577
09:09:34 09:09:34.499 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x27 zxid:0x1b txntype:14 reqpath:n/a
09:09:34 09:09:34.499 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000
09:09:34 09:09:34.499 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000002945e0000
09:09:34 09:09:34.499 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller
09:09:34 09:09:34.499 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 39,14 replyHeader:: 39,27,0 request:: org.apache.zookeeper.MultiOperationRecord@71c0d6c2 response:: org.apache.zookeeper.MultiResponse@f3584fa6
09:09:34 09:09:34.501 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1
09:09:34 09:09:34.501 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x28 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.502 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,4 replyHeader:: 40,27,-101 request:: '/feature,T response::
09:09:34 09:09:34.505 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map())
09:09:34 09:09:34.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:34 
09:09:34 09:09:34.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:34 09:09:34.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.506 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.507 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41201361577
09:09:34 09:09:34.507 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 38388847347
09:09:34 09:09:34.508 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x29 zxid:0x1c txntype:1 reqpath:n/a
09:09:34 09:09:34.508 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature
09:09:34 09:09:34.508 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 41751846345
09:09:34 09:09:34.508 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x29 zxid:0x1c txntype:1 reqpath:n/a
09:09:34 09:09:34.509 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000
09:09:34 09:09:34.509 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000002945e0000
09:09:34 09:09:34.509 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature
09:09:34 09:09:34.509 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 41,1 replyHeader:: 41,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature
09:09:34 09:09:34.509 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature
09:09:34 09:09:34.510 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature
09:09:34 09:09:34.510 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.511 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.510 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature
09:09:34 09:09:34.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.512 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.513 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1770973774506,1770973774506,0,0,0,0,38,0,28}
09:09:34 09:09:34.513 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 43,4 replyHeader:: 43,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1770973774506,1770973774506,0,0,0,0,38,0,28}
09:09:34 09:09:34.519 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up.
09:09:34 09:09:34.519 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler.
09:09:34 09:09:34.519 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms.
09:09:34 09:09:34.521 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state
09:09:34 09:09:34.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state
09:09:34 09:09:34.521 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response::
09:09:34 09:09:34.522 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms.
09:09:34 09:09:34.523 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete.
09:09:34 09:09:34.524 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting
09:09:34 09:09:34.549 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers
09:09:34 09:09:34.549 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0).
09:09:34 09:09:34.551 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election
09:09:34 09:09:34.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election
09:09:34 09:09:34.556 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response::
09:09:34 09:09:34.557 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.558 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions
09:09:34 09:09:34.558 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions
09:09:34 09:09:34.558 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T response::
09:09:34 09:09:34.559 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications
09:09:34 09:09:34.559 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.559 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification
09:09:34 09:09:34.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification
09:09:34 09:09:34.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.560 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1770973772537,1770973772537,0,0,0,0,0,0,16}
09:09:34 09:09:34.563 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications
09:09:34 09:09:34.563 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.564 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.564 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification
09:09:34 09:09:34.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification
09:09:34 09:09:34.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.564 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.565 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1770973772523,1770973772523,0,0,0,0,0,0,14}
09:09:34 09:09:34.566 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context
09:09:34 09:09:34.567 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.567 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids
09:09:34 09:09:34.567 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids
09:09:34 09:09:34.567 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.567 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.568 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.568 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1770973772455,1770973772455,0,1,0,0,0,1,25}
09:09:34 09:09:34.569 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1
09:09:34 09:09:34.570 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1
09:09:34 09:09:34.570 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.570 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.570 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.570 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3430313137225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373730393733373734313533227d,s{25,25,1770973774190,1770973774190,1,0,0,72057605117050880,209,0,25}
09:09:34 09:09:34.582 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting
09:09:34 09:09:34.583 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes
09:09:34 09:09:34.583 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff
09:09:34 09:09:34.587 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25)
09:09:34 09:09:34.588 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:34 09:09:34.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:34 09:09:34.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.589 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.589 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1770973772463,1770973772463,0,0,0,0,0,0,6}
09:09:34 09:09:34.593 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1)
09:09:34 09:09:34.595 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.596 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1
09:09:34 09:09:34.596 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1
09:09:34 09:09:34.597 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1770973774190,1770973774190,1,0,0,72057605117050880,209,0,25}
09:09:34 09:09:34.601 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1
09:09:34 09:09:34.613 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting
09:09:34 09:09:34.616 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1)
09:09:34 09:09:34.617 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet()
09:09:34 09:09:34.617 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet()
09:09:34 09:09:34.618 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Fetching topic deletions in progress
09:09:34 09:09:34.619 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting
09:09:34 09:09:34.619 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics
09:09:34 09:09:34.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics
09:09:34 09:09:34.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:34 , 'ip,'127.0.0.1
09:09:34 ]
09:09:34 09:09:34.620 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1770973772514,1770973772514,0,0,0,0,0,0,12}
09:09:34 09:09:34.621 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics
09:09:34 09:09:34.621 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics
09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:34 ]
09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials:
['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.621 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.622 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.622 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 09:09:34 09:09:34.622 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 09:09:34 09:09:34.622 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 09:09:34 09:09:34.622 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 54,12 replyHeader:: 54,28,0 request:: '/config/topics,F response:: v{},s{17,17,1770973772543,1770973772543,0,0,0,0,0,0,17} 09:09:34 09:09:34.623 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 09:09:34 09:09:34.623 [main-SendThread(127.0.0.1:46481)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 55,12 replyHeader:: 55,28,0 request:: '/config/changes,T response:: v{},s{9,9,1770973772501,1770973772501,0,0,0,0,0,0,9} 09:09:34 09:09:34.623 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 09:09:34 09:09:34.624 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.624 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 09:09:34 09:09:34.624 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 09:09:34 09:09:34.624 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.624 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.625 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.626 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 56,12 replyHeader:: 56,28,0 request:: '/config/clients,F response:: v{},s{18,18,1770973772579,1770973772579,0,0,0,0,0,0,18} 09:09:34 09:09:34.626 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 09:09:34 09:09:34.627 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 09:09:34 09:09:34.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 09:09:34 09:09:34.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.627 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.628 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 57,12 replyHeader:: 57,28,0 request:: '/config/users,F response:: v{},s{19,19,1770973772584,1770973772584,0,0,0,0,0,0,19} 09:09:34 09:09:34.630 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 09:09:34 09:09:34.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 09:09:34 09:09:34.630 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.630 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 58,12 replyHeader:: 58,28,0 request:: '/config/users,F response:: v{},s{19,19,1770973772584,1770973772584,0,0,0,0,0,0,19} 09:09:34 09:09:34.633 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 09:09:34 09:09:34.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 09:09:34 09:09:34.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.633 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 59,12 replyHeader:: 59,28,0 request:: '/config/ips,F 
response:: v{},s{21,21,1770973772593,1770973772593,0,0,0,0,0,0,21} 09:09:34 09:09:34.634 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 09:09:34 09:09:34.634 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.634 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 09:09:34 09:09:34.634 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 09:09:34 09:09:34.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.635 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 09:09:34 09:09:34.635 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 60,12 replyHeader:: 60,28,0 request:: '/config/brokers,F response:: v{},s{20,20,1770973772588,1770973772588,0,0,0,0,0,0,20} 09:09:34 09:09:34.636 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
09:09:34 09:09:34.637 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:34 09:09:34.637 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:34 09:09:34.639 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 09:09:34 09:09:34.639 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 09:09:34 09:09:34.640 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 09:09:34 09:09:34.640 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 09:09:34 09:09:34.640 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 09:09:34 09:09:34.640 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 09:09:34 09:09:34.643 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 09:09:34 09:09:34.644 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 09:09:34 09:09:34.644 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing 
request:: sessionid:0x1000002945e0000 type:exists cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 09:09:34 09:09:34.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 09:09:34 09:09:34.645 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 61,3 replyHeader:: 61,28,-101 request:: '/admin/reassign_partitions,T response:: 09:09:34 09:09:34.648 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 09:09:34 09:09:34.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 09:09:34 09:09:34.649 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 62,4 replyHeader:: 62,28,-101 request:: '/admin/preferred_replica_election,T response:: 09:09:34 09:09:34.650 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:34 09:09:34.650 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:34 09:09:34.650 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 09:09:34 09:09:34.651 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:09:34 09:09:34.651 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:09:34 09:09:34.651 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973774645 09:09:34 09:09:34.651 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 09:09:34 09:09:34.651 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 09:09:34 09:09:34.652 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 09:09:34 09:09:34.653 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 09:09:34 09:09:34.653 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 09:09:34 09:09:34.659 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.659 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.659 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.659 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41751846345 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.660 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41751846345 09:09:34 09:09:34.660 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49630 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:34 09:09:34.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:49630 09:09:34 09:09:34.666 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 09:09:34 09:09:34.666 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.666 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 09:09:34 09:09:34.672 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 09:09:34 09:09:34.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 09:09:34 09:09:34.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 41751846345 09:09:34 09:09:34.673 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 09:09:34 09:09:34.675 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 63,14 replyHeader:: 63,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 09:09:34 09:09:34.693 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.693 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG 
kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.698 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 09:09:34 bootstrap.servers = [SASL_PLAINTEXT://localhost:40117] 09:09:34 client.dns.lookup = use_all_dns_ips 09:09:34 client.id = test-consumer-id 09:09:34 connections.max.idle.ms = 300000 09:09:34 default.api.timeout.ms = 60000 09:09:34 metadata.max.age.ms = 300000 09:09:34 metric.reporters = [] 09:09:34 metrics.num.samples = 2 09:09:34 metrics.recording.level = INFO 09:09:34 metrics.sample.window.ms = 30000 09:09:34 receive.buffer.bytes = 65536 09:09:34 reconnect.backoff.max.ms = 1000 09:09:34 reconnect.backoff.ms = 50 09:09:34 request.timeout.ms = 15000 09:09:34 retries = 2147483647 09:09:34 retry.backoff.ms = 100 09:09:34 sasl.client.callback.handler.class = null 09:09:34 sasl.jaas.config = [hidden] 09:09:34 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:09:34 sasl.kerberos.min.time.before.relogin = 60000 09:09:34 sasl.kerberos.service.name = null 09:09:34 sasl.kerberos.ticket.renew.jitter = 0.05 09:09:34 sasl.kerberos.ticket.renew.window.factor = 0.8 09:09:34 sasl.login.callback.handler.class = null 09:09:34 sasl.login.class = null 09:09:34 sasl.login.connect.timeout.ms = null 09:09:34 sasl.login.read.timeout.ms = null 09:09:34 sasl.login.refresh.buffer.seconds = 300 09:09:34 sasl.login.refresh.min.period.seconds = 60 09:09:34 sasl.login.refresh.window.factor = 0.8 09:09:34 sasl.login.refresh.window.jitter = 0.05 09:09:34 sasl.login.retry.backoff.max.ms = 10000 09:09:34 sasl.login.retry.backoff.ms = 100 09:09:34 sasl.mechanism = PLAIN 09:09:34 sasl.oauthbearer.clock.skew.seconds = 30 09:09:34 sasl.oauthbearer.expected.audience = null 09:09:34 sasl.oauthbearer.expected.issuer = null 09:09:34 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:09:34 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:09:34 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:09:34 sasl.oauthbearer.jwks.endpoint.url = null 09:09:34 sasl.oauthbearer.scope.claim.name = scope 09:09:34 sasl.oauthbearer.sub.claim.name = sub 09:09:34 sasl.oauthbearer.token.endpoint.url = null 09:09:34 security.protocol = SASL_PLAINTEXT 09:09:34 security.providers = null 09:09:34 send.buffer.bytes = 131072 09:09:34 socket.connection.setup.timeout.max.ms = 30000 09:09:34 socket.connection.setup.timeout.ms = 10000 09:09:34 ssl.cipher.suites = null 09:09:34 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:09:34 ssl.endpoint.identification.algorithm = https 09:09:34 ssl.engine.factory.class = null 09:09:34 ssl.key.password = null 09:09:34 ssl.keymanager.algorithm = SunX509 09:09:34 ssl.keystore.certificate.chain = null 09:09:34 ssl.keystore.key = null 09:09:34 ssl.keystore.location = null 09:09:34 ssl.keystore.password = null 09:09:34 ssl.keystore.type = JKS 09:09:34 ssl.protocol = TLSv1.3 09:09:34 ssl.provider = null 09:09:34 ssl.secure.random.implementation = null 09:09:34 ssl.trustmanager.algorithm = PKIX 09:09:34 ssl.truststore.certificates = null 09:09:34 ssl.truststore.location = null 09:09:34 ssl.truststore.password = null 09:09:34 ssl.truststore.type = JKS 09:09:34 09:09:34 09:09:34.702 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 09:09:34 09:09:34.703 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 09:09:34 09:09:34.703 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
09:09:34 09:09:34.710 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 09:09:34 09:09:34.710 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 09:09:34 09:09:34.711 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 64,3 replyHeader:: 64,29,0 request:: '/controller,T response:: s{27,27,1770973774493,1770973774493,0,0,0,72057605117050880,54,0,27} 09:09:34 09:09:34.713 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 09:09:34 09:09:34.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 09:09:34 09:09:34.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:34 09:09:34.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:34 ] 09:09:34 09:09:34.713 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:34 , 'ip,'127.0.0.1 09:09:34 ] 09:09:34 09:09:34.714 [main-SendThread(127.0.0.1:46481)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 65,4 replyHeader:: 65,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373730393733373734343135227d,s{27,27,1770973774493,1770973774493,0,0,0,72057605117050880,54,0,27} 09:09:34 09:09:34.718 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:34 09:09:34.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 09:09:34 09:09:34.718 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 09:09:34 09:09:34.719 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 66,3 replyHeader:: 66,29,-101 request:: '/admin/preferred_replica_election,T response:: 09:09:34 09:09:34.724 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:40117 (id: -1 rack: null)], partitions = [], controller = null). 
09:09:34 09:09:34.724 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:34 09:09:34.725 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 09:09:34 09:09:34.725 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 09:09:34 09:09:34.727 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:34 09:09:34.727 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:34 09:09:34.730 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:09:34 09:09:34.730 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:09:34 09:09:34.730 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973774730 09:09:34 09:09:34.730 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 09:09:34 09:09:34.730 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 09:09:34 09:09:34.732 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1770973834731, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
09:09:34 09:09:34.736 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:34 09:09:34.736 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40117 (id: -1 rack: null) using address localhost/127.0.0.1 09:09:34 09:09:34.736 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:34 09:09:34.736 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:34 09:09:34.740 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49632 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:34 09:09:34.742 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:49632 09:09:34 09:09:34.745 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 09:09:34 09:09:34.746 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:34 09:09:34.746 [kafka-admin-client-thread | 
test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 09:09:34 09:09:34.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:34 09:09:34.746 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:34 09:09:34.767 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.767 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.773 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:34 09:09:34.775 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:34 09:09:34.776 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:34 09:09:34.777 
[TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:34 09:09:34.777 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:34 09:09:34.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:34 09:09:34.777 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:34 09:09:34.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:34 09:09:34.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:34 09:09:34.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:34 09:09:34.778 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 09:09:34 09:09:34.778 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 09:09:34 09:09:34.782 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:34 09:09:34.783 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:34 09:09:34.786 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 09:09:34 09:09:34.786 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 09:09:34 09:09:34.786 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:34 09:09:34.786 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:34 09:09:34.787 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:34 09:09:34.787 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:34 09:09:34.787 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:34 09:09:34.787 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 09:09:34 09:09:34.787 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 09:09:34 09:09:34.787 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 09:09:34 09:09:34.787 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 09:09:34 09:09:34.788 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 09:09:34 09:09:34.788 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 09:09:34 09:09:34.788 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - 
[AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 09:09:34 09:09:34.788 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:34 09:09:34.789 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:40117 (id: 1 rack: null) for sending state change requests 09:09:34 09:09:34.789 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:34 09:09:34.792 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40117, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 09:09:34 09:09:34.794 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.794 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] 
DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 09:09:34 09:09:34.814 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:34 09:09:34.816 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): 
UpdateMetadataResponseData(errorCode=0) 09:09:34 09:09:34.822 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], 
ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:34 09:09:34.823 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:40117 (id: -1 rack: null). 
correlationId=1, timeoutMs=14908 09:09:34 09:09:34.824 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14908 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:34 09:09:34.835 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":40117,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49630-0","totalTimeMs":21.934,"requestQueueTimeMs":11.963,"localTimeMs":9.334,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.177,"sendTimeMs":0.459,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:34 09:09:34.835 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3}
,{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49632-0","totalTimeMs":22.518,"requestQueueTimeMs":13.931,"localTimeMs":6.649,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.2,"sendTimeMs":1.736,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:34 09:09:34.864 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 09:09:34 09:09:34.865 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[]},"connection":"127.0.0.1:40117-127.0.0.1:49632-0","totalTimeMs":27.839,"requestQueueTimeMs":0.987,"localTimeMs":26.454,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.109,"sendTimeMs":0.287,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:34 09:09:34.868 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = lcpOyY1-QY2MMThgHGGgSA, nodes = [localhost:40117 (id: 1 rack: null)], partitions = [], controller = localhost:40117 (id: 1 rack: null)) 09:09:34 09:09:34.868 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.869 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:34 09:09:34.869 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:34 09:09:34.869 [kafka-admin-client-thread | test-consumer-id] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:34 09:09:34.869 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:34 09:09:34.870 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:40117 (id: 1 rack: null) 09:09:34 09:09:34.871 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 09:09:34 09:09:34.872 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:34 09:09:34.872 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 
09:09:34 09:09:34.879 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49634 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:34 09:09:34.879 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:49634 09:09:34 09:09:34.880 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:34 09:09:34.880 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:34 09:09:34.881 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:34 09:09:34.881 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:34 09:09:34.881 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:34 09:09:34.881 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during 
authentication 09:09:34 09:09:34.881 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:34 09:09:34.882 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 09:09:34 09:09:34.882 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:34 09:09:34.882 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 09:09:34 09:09:34.882 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:34 09:09:34.882 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:34 09:09:34.882 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:34 09:09:34.883 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 09:09:34 09:09:34.883 
[kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 09:09:34 09:09:34.883 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 09:09:34 09:09:34.883 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 09:09:34 09:09:34.883 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:34 09:09:34.887 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:34 09:09:34.888 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23
,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49634-1","totalTimeMs":2.306,"requestQueueTimeMs":0.454,"localTimeMs":1.565,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.097,"sendTimeMs":0.189,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 
09:09:34 09:09:34.888 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], 
IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:34 09:09:34.893 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:40117 (id: 1 rack: null). correlationId=3, timeoutMs=14972 09:09:34 09:09:34.893 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14972 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 09:09:34 09:09:34.900 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=40117, rack=null)], clusterAuthorizedOperations=-2147483648) 09:09:34 09:09:34.900 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":40117,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:40117-127.0.0.1:49634-1","totalTimeMs":6.293,"requestQueueTimeMs":1.099,"localTimeMs":4.847,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.139,"sendTimeMs":0.207,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:34 09:09:34.901 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 09:09:34 09:09:34.901 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
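As a side note on the "Hard shutdown in 31536000000 ms" figure logged above: that grace period works out to exactly one year, i.e. the admin client's close is effectively unbounded here. A quick arithmetic check (plain Java, nothing Kafka-specific assumed):

```java
// Sanity check on the "Hard shutdown in 31536000000 ms" value from the log
// above: dividing by the number of milliseconds in a day gives 365, i.e.
// the close grace period is one full year.
public class CloseTimeoutCheck {
    public static void main(String[] args) {
        long hardShutdownMs = 31_536_000_000L;   // value reported in the log
        long msPerDay = 24L * 60 * 60 * 1000;    // 86_400_000 ms in a day
        long days = hardShutdownMs / msPerDay;
        System.out.println(days + " days");      // prints "365 days"
    }
}
```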
09:09:34 09:09:34.901 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 09:09:34 09:09:34.902 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 09:09:34 09:09:34.902 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:40117 (id: 1 rack: null) 09:09:34 09:09:34.903 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40117-127.0.0.1:49634-1) disconnected 09:09:34 java.io.EOFException: null 09:09:34 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 09:09:34 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 09:09:34 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 09:09:34 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 09:09:34 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 09:09:34 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:34 at kafka.network.Processor.poll(SocketServer.scala:1055) 09:09:34 at kafka.network.Processor.run(SocketServer.scala:959) 09:09:34 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:34 09:09:34.905 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 09:09:34 09:09:34.905 
[kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 09:09:34 09:09:34.905 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 09:09:34 09:09:34.905 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 09:09:34 09:09:34.907 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 09:09:34 09:09:34.958 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40117-127.0.0.1:49632-0) disconnected 09:09:34 java.io.EOFException: null 09:09:34 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 09:09:34 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 09:09:34 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 09:09:34 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 09:09:34 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 09:09:34 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:34 at kafka.network.Processor.poll(SocketServer.scala:1055) 09:09:34 at kafka.network.Processor.run(SocketServer.scala:959) 09:09:34 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:34 09:09:34.958 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 
09:09:34 09:09:34.958 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:40117 09:09:34 09:09:34.959 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: 09:09:34 bootstrap.servers = [SASL_PLAINTEXT://localhost:40117] 09:09:34 client.dns.lookup = use_all_dns_ips 09:09:34 client.id = test-consumer-id 09:09:34 connections.max.idle.ms = 300000 09:09:34 default.api.timeout.ms = 60000 09:09:34 metadata.max.age.ms = 300000 09:09:34 metric.reporters = [] 09:09:34 metrics.num.samples = 2 09:09:34 metrics.recording.level = INFO 09:09:34 metrics.sample.window.ms = 30000 09:09:34 receive.buffer.bytes = 65536 09:09:34 reconnect.backoff.max.ms = 1000 09:09:34 reconnect.backoff.ms = 50 09:09:34 request.timeout.ms = 15000 09:09:34 retries = 2147483647 09:09:34 retry.backoff.ms = 100 09:09:34 sasl.client.callback.handler.class = null 09:09:34 sasl.jaas.config = [hidden] 09:09:34 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:09:34 sasl.kerberos.min.time.before.relogin = 60000 09:09:34 sasl.kerberos.service.name = null 09:09:34 sasl.kerberos.ticket.renew.jitter = 0.05 09:09:34 sasl.kerberos.ticket.renew.window.factor = 0.8 09:09:34 sasl.login.callback.handler.class = null 09:09:34 sasl.login.class = null 09:09:34 sasl.login.connect.timeout.ms = null 09:09:34 sasl.login.read.timeout.ms = null 09:09:34 sasl.login.refresh.buffer.seconds = 300 09:09:34 sasl.login.refresh.min.period.seconds = 60 09:09:34 sasl.login.refresh.window.factor = 0.8 09:09:34 sasl.login.refresh.window.jitter = 0.05 09:09:34 sasl.login.retry.backoff.max.ms = 10000 09:09:34 sasl.login.retry.backoff.ms = 100 09:09:34 sasl.mechanism = PLAIN 09:09:34 sasl.oauthbearer.clock.skew.seconds = 30 09:09:34 sasl.oauthbearer.expected.audience = null 09:09:34 sasl.oauthbearer.expected.issuer = null 09:09:34 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:09:34 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:09:34 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:09:34 sasl.oauthbearer.jwks.endpoint.url = null 09:09:34 sasl.oauthbearer.scope.claim.name = scope 09:09:34 sasl.oauthbearer.sub.claim.name = sub 09:09:34 sasl.oauthbearer.token.endpoint.url = null 09:09:34 security.protocol = SASL_PLAINTEXT 09:09:34 security.providers = null 09:09:34 send.buffer.bytes = 131072 09:09:34 socket.connection.setup.timeout.max.ms = 30000 09:09:34 socket.connection.setup.timeout.ms = 10000 09:09:34 ssl.cipher.suites = null 09:09:34 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:09:34 ssl.endpoint.identification.algorithm = https 09:09:34 ssl.engine.factory.class = null 09:09:34 ssl.key.password = null 09:09:34 ssl.keymanager.algorithm = SunX509 09:09:34 ssl.keystore.certificate.chain = null 09:09:34 ssl.keystore.key = null 09:09:34 ssl.keystore.location = null 09:09:34 ssl.keystore.password = null 09:09:34 ssl.keystore.type = JKS 09:09:34 ssl.protocol = TLSv1.3 09:09:34 ssl.provider = null 09:09:34 ssl.secure.random.implementation = null 09:09:34 ssl.trustmanager.algorithm = PKIX 09:09:34 ssl.truststore.certificates = null 09:09:34 ssl.truststore.location = null 09:09:34 ssl.truststore.password = null 09:09:34 ssl.truststore.type = JKS 09:09:34 09:09:34 09:09:34.960 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:40117 (id: -1 rack: null)], partitions = [], controller = null). 09:09:34 09:09:34.961 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
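Condensed, the AdminClientConfig dump above corresponds to a client configuration along these lines. This is an illustrative sketch, not the test's actual file: the log prints `sasl.jaas.config = [hidden]`, so the JAAS line below is reconstructed, with the username taken from the `User:kafkaclient` principal seen elsewhere in the log and the password left as a placeholder.

```properties
bootstrap.servers=SASL_PLAINTEXT://localhost:40117
client.id=test-consumer-id
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
request.timeout.ms=15000
# The log shows sasl.jaas.config=[hidden]; the credentials here are placeholders.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="kafkaclient" \
    password="<placeholder>";
```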
09:09:34 09:09:34.965 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:09:34 09:09:34.965 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:09:34 09:09:34.965 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973774965 09:09:34 09:09:34.965 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 09:09:34 09:09:34.965 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 09:09:34 09:09:34.966 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:34 09:09:34.966 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40117 (id: -1 rack: null) using address localhost/127.0.0.1 09:09:34 09:09:34.966 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:34 09:09:34.966 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:34 09:09:34.967 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49636 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:34 09:09:34.967 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:49636 09:09:34 09:09:34.970 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 09:09:34 09:09:34.970 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:34 09:09:34.971 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:34 09:09:34.971 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 09:09:34 09:09:34.971 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:34 09:09:34.971 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:34 09:09:34.972 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1770973834970, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
09:09:34 09:09:34.972 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:34 09:09:34.972 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:34 09:09:34.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:34 09:09:34.972 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:34 09:09:34.973 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 09:09:34 09:09:34.973 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 09:09:34 09:09:34.973 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:34 09:09:34.973 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:34 09:09:34.974 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:34 09:09:34.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:34 09:09:34.974 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 09:09:34 09:09:34.974 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 09:09:34 09:09:34.974 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 09:09:34 09:09:34.974 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
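The INITIAL → INTERMEDIATE → COMPLETE transitions above carry a single SASL/PLAIN token, which per RFC 4616 is just the UTF-8 string `[authzid] NUL authcid NUL passwd`. A minimal stdlib-only sketch of that encoding (the `kafkaclient` name matches the principal the broker logs; the password is a placeholder, since the real credentials are hidden):

```java
import java.nio.charset.StandardCharsets;

// Sketch of the one token a SASL/PLAIN exchange carries (RFC 4616):
// "[authzid] NUL authcid NUL passwd", UTF-8 encoded. The authzid is
// empty here, as is typical when it equals the authcid.
public class PlainToken {
    static byte[] encode(String authzid, String authcid, String passwd) {
        String token = authzid + '\u0000' + authcid + '\u0000' + passwd;
        return token.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "kafkaclient" is 11 bytes; with two NUL separators and a
        // 6-byte placeholder password the token is 19 bytes long.
        byte[] token = encode("", "kafkaclient", "secret");
        System.out.println(token.length); // prints 19
    }
}
```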
09:09:34 09:09:34.974 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:34 09:09:34.976 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:34 09:09:34.977 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - 
[AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 
[usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:34 09:09:34.977 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,
"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49636-1","totalTimeMs":1.481,"requestQueueTimeMs":0.265,"localTimeMs":0.955,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.194,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:34 09:09:34.977 [kafka-admin-client-thread | 
test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:40117 (id: -1 rack: null). correlationId=1, timeoutMs=14988 09:09:34 09:09:34.977 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14988 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:34 09:09:34.978 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 09:09:34 09:09:34.979 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = lcpOyY1-QY2MMThgHGGgSA, nodes = [localhost:40117 (id: 1 rack: null)], partitions = [], controller = localhost:40117 (id: 1 rack: null)) 09:09:35 09:09:34.979 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[]},"connection":"127.0.0.1:40117-127.0.0.1:49636-1","totalTimeMs":0.945,"requestQueueTimeMs":0.095,"localTimeMs":0.65,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.051,"sendTimeMs":0.147,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:35 09:09:34.979 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:35 09:09:34.979 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:35 09:09:34.979 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:35 09:09:34.979 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:35 09:09:34.979 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49638 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize 
[actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:35 09:09:34.979 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:49638 09:09:35 09:09:34.980 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 09:09:35 09:09:34.980 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:35 09:09:34.980 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 09:09:35 09:09:34.980 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:35 09:09:34.980 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:35 09:09:34.981 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:35 09:09:34.981 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:35 09:09:34.981 
[kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:35 09:09:34.981 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:35 09:09:34.981 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 09:09:35 09:09:34.982 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:35 09:09:34.982 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:35 09:09:34.982 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:35 09:09:34.982 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 09:09:35 09:09:34.982 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 
09:09:35 09:09:34.983 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:35 09:09:34.985 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:35 09:09:34.985 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":
38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49638-2","totalTimeMs":1.321,"requestQueueTimeMs":0.209,"localTimeMs":0.872,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.063,"sendTimeMs":0.174,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:35 09:09:34.986 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], 
JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
09:09:35 09:09:34.986 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14992, validateOnly=false) to localhost:40117 (id: 1 rack: null). correlationId=3, timeoutMs=14992 09:09:35 09:09:34.987 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14992 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14992, validateOnly=false) 09:09:35 09:09:35.011 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.011 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 09:09:35 09:09:35.011 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 09:09:35 09:09:35.011 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 09:09:35 09:09:35.012 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.012 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.012 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.012 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 09:09:35 09:09:35.038 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 09:09:35 09:09:35.041 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 09:09:35 09:09:35.043 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 09:09:35 09:09:35.044 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 09:09:35 09:09:35.045 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.045 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.046 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.046 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.046 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.046 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41751846345 09:09:35 09:09:35.046 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 43145532402 09:09:35 09:09:35.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 09:09:35 09:09:35.047 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 09:09:35 09:09:35.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 46093055912 09:09:35 09:09:35.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 09:09:35 09:09:35.048 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 
request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 46093055912 09:09:35 09:09:35.057 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 44261972427 09:09:35 09:09:35.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 09:09:35 09:09:35.058 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 47131980593 09:09:35 09:09:35.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 09:09:35 09:09:35.059 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got 
notification session id: 0x1000002945e0000 09:09:35 09:09:35.059 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002945e0000 09:09:35 09:09:35.059 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2272704f54757378505269477972736a6a6a485f667741222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 09:09:35 09:09:35.060 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 09:09:35 09:09:35.062 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 09:09:35 09:09:35.062 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 09:09:35 09:09:35.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 09:09:35 09:09:35.062 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.062 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.063 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1770973772463,1770973772463,0,1,0,0,0,1,32} 09:09:35 09:09:35.063 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.063 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.064 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.064 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic 
finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,F response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2272704f54757378505269477972736a6a6a485f667741222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1770973775056,1770973775056,0,0,0,0,116,0,32} 09:09:35 09:09:35.065 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.065 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 09:09:35 09:09:35.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.066 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.066 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: 
#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a2272704f54757378505269477972736a6a6a485f667741222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1770973775056,1770973775056,0,0,0,0,116,0,32} 09:09:35 09:09:35.076 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(rpOTusxPRiGyrsjjjH_fwA),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 09:09:35 09:09:35.077 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 09:09:35 09:09:35.080 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.080 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:35 09:09:35.086 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 
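The `response:: #7b2270...` payload above is the raw content of the `/brokers/topics/my-test-topic` znode, which ZooKeeper prints as a hex-encoded byte string; it is just UTF-8 JSON. A minimal sketch decoding the exact payload from this log (the hex string below is copied verbatim from the line above):

```python
import json

# Hex payload returned by ZooKeeper for /brokers/topics/my-test-topic,
# copied from the "response:: #7b2270..." line in the log above.
hex_payload = (
    "7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964"
    "223a2272704f54757378505269477972736a6a6a485f667741222c22616464696e"
    "675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173"
    "223a7b7d2c2276657273696f6e223a337d"
)

# The znode body is plain UTF-8 JSON describing the partition assignment.
znode = json.loads(bytes.fromhex(hex_payload).decode("utf-8"))
print(znode)
# {'partitions': {'0': [1]}, 'topic_id': 'rpOTusxPRiGyrsjjjH_fwA',
#  'adding_replicas': {}, 'removing_replicas': {}, 'version': 3}
```

The decoded `topic_id` (`rpOTusxPRiGyrsjjjH_fwA`) and single replica on broker 1 match the controller's `TopicIdReplicaAssignment` logged a few lines later.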
09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 47131980593 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.095 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 47131980593 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 48396906951 09:09:35 09:09:35.096 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 51977002948 09:09:35 09:09:35.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 09:09:35 09:09:35.099 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 51977002948 09:09:35 09:09:35.099 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 
type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 09:09:35 09:09:35.100 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51977002948 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.102 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.103 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.103 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.103 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.103 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 51977002948 09:09:35 09:09:35.103 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 49782340831 09:09:35 09:09:35.103 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53386951052 09:09:35 09:09:35.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 09:09:35 09:09:35.105 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 53386951052 09:09:35 09:09:35.105 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 09:09:35 09:09:35.106 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client 
credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53386951052 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53386951052 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 52126904086 09:09:35 09:09:35.110 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53578972280 09:09:35 09:09:35.111 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 09:09:35 09:09:35.111 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.111 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 53578972280 09:09:35 09:09:35.112 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 09:09:35 09:09:35.112 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 09:09:35 09:09:35.120 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 09:09:35 09:09:35.124 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 09:09:35 09:09:35.127 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 09:09:35 09:09:35.129 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:35 09:09:35.130 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=40117)]) 09:09:35 09:09:35.142 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions 09:09:35 09:09:35.181 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 09:09:35 09:09:35.182 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 09:09:35 09:09:35.199 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 09:09:35 09:09:35.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 09:09:35 09:09:35.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.200 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: 
clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1770973775045,1770973775045,0,0,0,0,25,0,31} 09:09:35 09:09:35.287 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:35 09:09:35.291 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 09:09:35 09:09:35.292 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:35 09:09:35.292 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:35 09:09:35.297 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:35 09:09:35.313 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:35 09:09:35.315 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:35 09:09:35.318 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit11182757027218931278/my-test-topic-0 with properties {} 09:09:35 09:09:35.319 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 09:09:35 09:09:35.320 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 09:09:35 09:09:35.321 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(rpOTusxPRiGyrsjjjH_fwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:35 09:09:35.331 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 09:09:35 09:09:35.435 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 
09:09:35 09:09:35.440 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 300ms correlationId 1 from controller 1 for 1 partitions 09:09:35 09:09:35.445 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=rpOTusxPRiGyrsjjjH_fwA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 09:09:35 09:09:35.447 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40117, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 09:09:35 09:09:35.448 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":40117}]},"response":{"errorCode":0,"topics":[{"topicId":"rpOTusxPRiGyrsjjjH_fwA","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49630-0","totalTimeMs":313.679,"requestQueueTimeMs":5.575,"localTimeMs":307.577,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.1,"sendTimeMs":0.425,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:35 09:09:35.458 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 09:09:35 09:09:35.466 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 09:09:35 09:09:35.466 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 
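The `kafka.request.logger` entry above breaks the LEADER_AND_ISR round trip into per-stage times. Assuming (as this entry suggests) that `totalTimeMs` is roughly the sum of the queue, local, remote, response-queue, and send stages, the numbers from the log can be checked directly:

```python
# Per-stage timing fields copied from the "Completed request" entry
# for LEADER_AND_ISR above; totalTimeMs was reported as 313.679.
stages = {
    "requestQueueTimeMs": 5.575,
    "localTimeMs": 307.577,       # dominated by log/partition creation
    "remoteTimeMs": 0.0,
    "responseQueueTimeMs": 0.1,
    "sendTimeMs": 0.425,
}

# The stages sum to ~313.677 ms; small rounding drift vs. the
# reported total is expected.
total = sum(stages.values())
print(round(total, 3))
```

Here the broker-local time (`localTimeMs`) accounts for nearly all of the request, consistent with the segment/index creation logged for `my-test-topic-0` in the preceding lines.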
09:09:35 09:09:35.467 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), 
CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 09:09:35 09:09:35.467 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14992,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"
1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49638-2","totalTimeMs":478.575,"requestQueueTimeMs":2.358,"localTimeMs":92.614,"remoteTimeMs":383.223,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.287,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:35 09:09:35.468 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 09:09:35 09:09:35.468 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":40117,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49630-0","totalTimeMs":19.108,"requestQueueTimeMs":2.872,"localTimeMs":14.976,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":1.076,"sendTimeMs":0.183,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:35 09:09:35.471 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 09:09:35 09:09:35.471 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
09:09:35 09:09:35.472 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered
09:09:35 09:09:35.472 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40117-127.0.0.1:49638-2) disconnected
09:09:35 java.io.EOFException: null
09:09:35 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:35 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:35 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:35 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:35 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:35 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:35 	at kafka.network.Processor.poll(SocketServer.scala:1055)
09:09:35 	at kafka.network.Processor.run(SocketServer.scala:959)
09:09:35 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:35 09:09:35.472 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40117-127.0.0.1:49636-1) disconnected
09:09:35 java.io.EOFException: null
09:09:35 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:35 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:35 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:35 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:35 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:35 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:35 	at kafka.network.Processor.poll(SocketServer.scala:1055)
09:09:35 	at kafka.network.Processor.run(SocketServer.scala:959)
09:09:35 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:35 09:09:35.473 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
09:09:35 09:09:35.474 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
09:09:35 09:09:35.474 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
09:09:35 09:09:35.474 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread.
09:09:35 09:09:35.474 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed.
09:09:35 09:09:35.493 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
09:09:35 	allow.auto.create.topics = false
09:09:35 	auto.commit.interval.ms = 5000
09:09:35 	auto.offset.reset = latest
09:09:35 	bootstrap.servers = [SASL_PLAINTEXT://localhost:40117]
09:09:35 	check.crcs = true
09:09:35 	client.dns.lookup = use_all_dns_ips
09:09:35 	client.id = mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82
09:09:35 	client.rack = 
09:09:35 	connections.max.idle.ms = 540000
09:09:35 	default.api.timeout.ms = 60000
09:09:35 	enable.auto.commit = true
09:09:35 	exclude.internal.topics = true
09:09:35 	fetch.max.bytes = 52428800
09:09:35 	fetch.max.wait.ms = 500
09:09:35 	fetch.min.bytes = 1
09:09:35 	group.id = mso-group
09:09:35 	group.instance.id = null
09:09:35 	heartbeat.interval.ms = 3000
09:09:35 	interceptor.classes = []
09:09:35 	internal.leave.group.on.close = true
09:09:35 	internal.throw.on.fetch.stable.offset.unsupported = false
09:09:35 	isolation.level = read_uncommitted
09:09:35 	key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
09:09:35 	max.partition.fetch.bytes = 1048576
09:09:35 	max.poll.interval.ms = 600000
09:09:35 	max.poll.records = 500
09:09:35 	metadata.max.age.ms = 300000
09:09:35 	metric.reporters = []
09:09:35 	metrics.num.samples = 2
09:09:35 	metrics.recording.level = INFO
09:09:35 	metrics.sample.window.ms = 30000
09:09:35 	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
09:09:35 	receive.buffer.bytes = 65536
09:09:35 	reconnect.backoff.max.ms = 1000
09:09:35 	reconnect.backoff.ms = 50
09:09:35 	request.timeout.ms = 30000
09:09:35 	retry.backoff.ms = 100
09:09:35 	sasl.client.callback.handler.class = null
09:09:35 	sasl.jaas.config = [hidden]
09:09:35 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:09:35 	sasl.kerberos.min.time.before.relogin = 60000
09:09:35 	sasl.kerberos.service.name = null
09:09:35 	sasl.kerberos.ticket.renew.jitter = 0.05
09:09:35 	sasl.kerberos.ticket.renew.window.factor = 0.8
09:09:35 	sasl.login.callback.handler.class = null
09:09:35 	sasl.login.class = null
09:09:35 	sasl.login.connect.timeout.ms = null
09:09:35 	sasl.login.read.timeout.ms = null
09:09:35 	sasl.login.refresh.buffer.seconds = 300
09:09:35 	sasl.login.refresh.min.period.seconds = 60
09:09:35 	sasl.login.refresh.window.factor = 0.8
09:09:35 	sasl.login.refresh.window.jitter = 0.05
09:09:35 	sasl.login.retry.backoff.max.ms = 10000
09:09:35 	sasl.login.retry.backoff.ms = 100
09:09:35 	sasl.mechanism = PLAIN
09:09:35 	sasl.oauthbearer.clock.skew.seconds = 30
09:09:35 	sasl.oauthbearer.expected.audience = null
09:09:35 	sasl.oauthbearer.expected.issuer = null
09:09:35 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:09:35 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:09:35 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:09:35 	sasl.oauthbearer.jwks.endpoint.url = null
09:09:35 	sasl.oauthbearer.scope.claim.name = scope
09:09:35 	sasl.oauthbearer.sub.claim.name = sub
09:09:35 	sasl.oauthbearer.token.endpoint.url = null
09:09:35 	security.protocol = SASL_PLAINTEXT
09:09:35 	security.providers = null
09:09:35 	send.buffer.bytes = 131072
09:09:35 	session.timeout.ms = 50000
09:09:35 	socket.connection.setup.timeout.max.ms = 30000
09:09:35 	socket.connection.setup.timeout.ms = 10000
09:09:35 	ssl.cipher.suites = null
09:09:35 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:09:35 	ssl.endpoint.identification.algorithm = https
09:09:35 	ssl.engine.factory.class = null
09:09:35 	ssl.key.password = null
09:09:35 	ssl.keymanager.algorithm = SunX509
09:09:35 	ssl.keystore.certificate.chain = null
09:09:35 	ssl.keystore.key = null
09:09:35 	ssl.keystore.location = null
09:09:35 	ssl.keystore.password = null
09:09:35 	ssl.keystore.type = JKS
09:09:35 	ssl.protocol = TLSv1.3
09:09:35 	ssl.provider = null
09:09:35 	ssl.secure.random.implementation = null
09:09:35 	ssl.trustmanager.algorithm = PKIX
09:09:35 	ssl.truststore.certificates = null
09:09:35 	ssl.truststore.location = null
09:09:35 	ssl.truststore.password = null
09:09:35 	ssl.truststore.type = JKS
09:09:35 	value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
09:09:35 
09:09:35 09:09:35.494 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initializing the Kafka consumer
09:09:35 09:09:35.504 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in.
09:09:35 09:09:35.548 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1
09:09:35 09:09:35.548 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5
09:09:35 09:09:35.548 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973775548
09:09:35 09:09:35.549 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Kafka consumer initialized
09:09:35 09:09:35.549 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Subscribed to topic(s): my-test-topic
09:09:35 09:09:35.549 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: -1 rack: null)
09:09:35 09:09:35.553 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:35 09:09:35.553 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: -1 rack: null) using address localhost/127.0.0.1
09:09:35 09:09:35.553 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:35 09:09:35.553 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:35 09:09:35.554 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49640 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
09:09:35 09:09:35.554 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:49640
09:09:35 09:09:35.554 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
09:09:35 09:09:35.554 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE
09:09:35 09:09:35.555 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed connection to node -1. Fetching API versions.
09:09:35 09:09:35.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication
09:09:35 09:09:35.555 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication
09:09:35 09:09:35.556 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication
09:09:35 09:09:35.556 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST
09:09:35 09:09:35.556 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
09:09:35 09:09:35.556 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication
09:09:35 09:09:35.556 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client
09:09:35 09:09:35.557 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INITIAL
09:09:35 09:09:35.557 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INTERMEDIATE
09:09:35 09:09:35.558 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication
09:09:35 09:09:35.558 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client
09:09:35 09:09:35.559 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication
09:09:35 09:09:35.559 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to COMPLETE
09:09:35 09:09:35.559 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1
09:09:35 09:09:35.559 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication
09:09:35 09:09:35.559 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1
09:09:35 09:09:35.559 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating API versions fetch from node -1.
09:09:35 09:09:35.559 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1')
09:09:35 09:09:35.561 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[])
09:09:35 09:09:35.562 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]).
09:09:35 09:09:35.563 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: -1 rack: null)
09:09:35 09:09:35.563 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49640-2","totalTimeMs":1.778,"requestQueueTimeMs":0.371,"localTimeMs":0.737,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.281,"sendTimeMs":0.387,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}}
09:09:35 09:09:35.563 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:35 09:09:35.564 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
09:09:35 09:09:35.575 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
09:09:35 09:09:35.575 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49640-2","totalTimeMs":10.681,"requestQueueTimeMs":2.001,"localTimeMs":8.284,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.126,"sendTimeMs":0.268,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:35 09:09:35.577 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to rpOTusxPRiGyrsjjjH_fwA
09:09:35 09:09:35.579 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Cluster ID: lcpOyY1-QY2MMThgHGGgSA
09:09:35 09:09:35.579 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)}
09:09:35 09:09:35.581 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:35 09:09:35.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:35 09:09:35.582 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:35 09:09:35.583 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.583 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:35 09:09:35.583 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:35 09:09:35.583 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response::
09:09:35 09:09:35.584 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.584 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:35 09:09:35.584 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:35 09:09:35.584 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.584 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.584 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.584 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1770973772463,1770973772463,0,1,0,0,0,1,32}
09:09:35 09:09:35.589 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1))
09:09:35 09:09:35.590 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.608 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a
09:09:35 09:09:35.608 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101
09:09:35 09:09:35.608 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response::
09:09:35 09:09:35.609 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 53578972280
09:09:35 09:09:35.610 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 53926643840
09:09:35 09:09:35.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a
09:09:35 09:09:35.658 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config
09:09:35 09:09:35.658 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 56306858277
09:09:35 09:09:35.658 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a
09:09:35 09:09:35.658 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets
09:09:35 09:09:35.670 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.670 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.671 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.671 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.671 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.671 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56306858277
09:09:35 09:09:35.671 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56458382131
09:09:35 09:09:35.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a
09:09:35 09:09:35.693 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 56901056127
09:09:35 09:09:35.693 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a
09:09:35 09:09:35.694 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000
09:09:35 09:09:35.694 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002945e0000
09:09:35 09:09:35.694 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics
09:09:35 09:09:35.694 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22524e755741326d68514b47515a77487552694d494351222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets
09:09:35 09:09:35.695 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:35 09:09:35.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics
09:09:35 09:09:35.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.696 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment
09:09:35 09:09:35.696 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1770973772463,1770973772463,0,2,0,0,0,2,38}
09:09:35 09:09:35.698 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:35 09:09:35.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:35 09:09:35.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.699 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: #7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22524e755741326d68514b47515a77487552694d494351222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1770973775670,1770973775670,0,0,0,0,548,0,38}
09:09:35 09:09:35.699 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:35 09:09:35.706 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:35 09:09:35.707 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49640-2","totalTimeMs":130.238,"requestQueueTimeMs":1.613,"localTimeMs":128.235,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.113,"sendTimeMs":0.275,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973775706, latencyMs=156, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:35 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:35 09:09:35.707 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:35 09:09:35.708 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:35 09:09:35.708 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:35 09:09:35.708 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49642 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
09:09:35 09:09:35.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:49642
09:09:35 09:09:35.708 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(RNuWA2mhQKGQZwHuRiMICQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))]
09:09:35 09:09:35.709 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40
09:09:35 09:09:35.709 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
09:09:35 09:09:35.709 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.709 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.709 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.709 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.710 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE
09:09:35 09:09:35.710 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed connection to node 1. Fetching API versions.
09:09:35 09:09:35.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication
09:09:35 09:09:35.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication
09:09:35 09:09:35.710 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication
09:09:35 09:09:35.710 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.711 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.711 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.711 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.711 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.711 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST
09:09:35 09:09:35.711 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
09:09:35 09:09:35.711 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication
09:09:35 09:09:35.711 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client
09:09:35 09:09:35.711 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INITIAL
09:09:35 09:09:35.712 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INTERMEDIATE
09:09:35 09:09:35.711 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication
09:09:35 09:09:35.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client
09:09:35 09:09:35.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication
09:09:35 09:09:35.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1
09:09:35 09:09:35.713 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to COMPLETE
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating API versions fetch from node 1.
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1')
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.713 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1
09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger -
[Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from 
NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 
09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 09:09:35 09:09:35.714 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:35 09:09:35.715 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion"
:0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.32,"requestQueueTime
Ms":0.209,"localTimeMs":0.917,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.063,"sendTimeMs":0.129,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:35 09:09:35.716 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, 
minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:35 09:09:35.717 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 has finalized 
features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], 
DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:35 09:09:35.717 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:35 09:09:35.718 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:35 09:09:35.718 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:35 09:09:35.720 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:35 09:09:35.720 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.684,"requestQueueTimeMs":0.114,"localTimeMs":1.386,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.057,"sendTimeMs":0.126,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:35 09:09:35.720 [main] DEBUG org.apache.kafka.clients.Metadata - 
[Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:35 09:09:35.721 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:35 09:09:35.721 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:35 09:09:35.721 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56901056127 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.724 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56901056127 09:09:35 09:09:35.724 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 57649852681 09:09:35 09:09:35.724 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59657095866 09:09:35 09:09:35.724 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 09:09:35 09:09:35.730 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.730 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 59657095866 09:09:35 09:09:35.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 09:09:35 09:09:35.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:35 09:09:35.730 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x58 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:35 09:09:35.731 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 09:09:35 09:09:35.731 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 88,3 replyHeader:: 88,39,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:35 09:09:35.731 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:35 09:09:35.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002945e0000 type:exists cxid:0x59 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:35 09:09:35.732 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 89,3 replyHeader:: 89,39,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59657095866 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.732 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.743 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.743 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.743 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.743 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59657095866
09:09:35 09:09:35.743 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58768029086
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61628124345
09:09:35 09:09:35.744 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:35 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61628124345
09:09:35 09:09:35.744 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 61628124345
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62650910568
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63428180132
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.744 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63428180132
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.745 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:35 09:09:35.745 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":23.433,"requestQueueTimeMs":0.075,"localTimeMs":23.132,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.153,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:35 09:09:35.745 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973775745, latencyMs=24, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:35 09:09:35.745 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63428180132
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 65455018753
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66243891961
09:09:35 09:09:35.745 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:35 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:35 09:09:35.745 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66243891961
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66243891961
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62175216190
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63122934878
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63122934878
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63122934878
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66988227636
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68443246370
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68443246370
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 68443246370
09:09:35 09:09:35.746 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 67993016787
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72180823885
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72180823885
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72180823885
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70117972328
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72435022804
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72435022804
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 72435022804
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75027086830
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76674604984
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76674604984
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.747 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76674604984
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 75420360273
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76459657387
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76459657387
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76459657387
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 76271321178
09:09:35 09:09:35.748 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79497809355
09:09:35 09:09:35.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5a zxid:0x28 txntype:14 reqpath:n/a
09:09:35 09:09:35.748 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 61628124345
09:09:35 09:09:35.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5a zxid:0x28 txntype:14 reqpath:n/a
09:09:35 09:09:35.749 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5b zxid:0x29 txntype:14 reqpath:n/a
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 63428180132
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5b zxid:0x29 txntype:14 reqpath:n/a
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5c zxid:0x2a txntype:14 reqpath:n/a
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 66243891961
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5c zxid:0x2a txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5d zxid:0x2b txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 63122934878
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5d zxid:0x2b txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5e zxid:0x2c txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 68443246370
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5e zxid:0x2c txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x5f zxid:0x2d txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 72180823885
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x5f zxid:0x2d txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x60 zxid:0x2e txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 72435022804
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x60 zxid:0x2e txntype:14 reqpath:n/a
09:09:35 09:09:35.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x61 zxid:0x2f txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 76674604984
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x61 zxid:0x2f txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x62 zxid:0x30 txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 76459657387
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x62 zxid:0x30 txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x63 zxid:0x31 txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 79497809355
09:09:35 09:09:35.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x63 zxid:0x31 txntype:14 reqpath:n/a
09:09:35 09:09:35.751 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 79497809355
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 79497809355
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77382658800
09:09:35 09:09:35.752 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77456712918
09:09:35 09:09:35.752 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4
09:09:35 09:09:35.752 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3
09:09:35 09:09:35.752 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5
09:09:35 09:09:35.752 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6
09:09:35 09:09:35.753 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793
09:09:35 09:09:35.753 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f
09:09:35 09:09:35.753 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0
09:09:35 09:09:35.753 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.753 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.753 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.753 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.753 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77456712918
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 77456712918
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80753366031
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84228245535
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84228245535
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84228245535
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80594106345
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84169159426
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84169159426
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:35
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84169159426
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84925856435
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85238193188
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:35 09:09:35.754 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:35 ]
09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:35 , 'ip,'127.0.0.1
09:09:35 ]
09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85238193188
09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:35 09:09:35.755 [ProcessThread(sid:0
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85238193188 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 87321264519 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88903407426 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88903407426 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - 
Checking session 0x1000002945e0000 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88903407426 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88627065842 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91581262769 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91581262769 09:09:35 09:09:35.755 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91581262769 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 91363034829 09:09:35 09:09:35.755 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94804546928 09:09:35 09:09:35.756 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 09:09:35 09:09:35.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x64 zxid:0x32 txntype:14 reqpath:n/a 09:09:35 09:09:35.756 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are 
matching for Zxid: 32, Digest in log and actual tree: 77456712918 09:09:35 09:09:35.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x64 zxid:0x32 txntype:14 reqpath:n/a 09:09:35 09:09:35.756 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 100,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94804546928 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 
09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94804546928 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95157492641 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96607388137 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96607388137 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96607388137 09:09:35 09:09:35.757 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94538080396 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95611755347 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95611755347 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 4 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95611755347 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98388584974 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101651538980 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101651538980 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.758 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101651538980 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99078616190 09:09:35 09:09:35.758 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99200249535 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x65 zxid:0x33 txntype:14 reqpath:n/a 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 84228245535 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x65 zxid:0x33 txntype:14 reqpath:n/a 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x66 zxid:0x34 txntype:14 reqpath:n/a 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 
84169159426 09:09:35 09:09:35.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x66 zxid:0x34 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x67 zxid:0x35 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 85238193188 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x67 zxid:0x35 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x68 zxid:0x36 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 88903407426 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x68 zxid:0x36 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x69 zxid:0x37 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 91581262769 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002945e0000 type:multi cxid:0x69 zxid:0x37 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6a zxid:0x38 txntype:14 reqpath:n/a 09:09:35 09:09:35.781 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: org.apache.zookeeper.MultiResponse@8dcf531e 09:09:35 09:09:35.782 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 09:09:35 09:09:35.782 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 09:09:35 09:09:35.782 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 09:09:35 09:09:35.782 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 
response:: org.apache.zookeeper.MultiResponse@2c19b7d1 09:09:35 09:09:35.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 94804546928 09:09:35 09:09:35.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6a zxid:0x38 txntype:14 reqpath:n/a 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.783 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99200249535 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99200249535 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98341856494 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102620313481 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.783 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102620313481 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 
31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 102620313481 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 104657477230 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107674518719 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107674518719 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.784 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107674518719 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 106172973717 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107886360498 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107886360498 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl 
- Checking session 0x1000002945e0000 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.784 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107886360498 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 111120264064 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114739953220 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114739953220 09:09:35 09:09:35.785 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114739953220 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114518333171 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117793366731 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117793366731 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117793366731 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 115686458672 09:09:35 09:09:35.785 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116440876358 09:09:35 09:09:35.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6b zxid:0x39 txntype:14 reqpath:n/a 09:09:35 09:09:35.785 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 96607388137 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6b zxid:0x39 
txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6c zxid:0x3a txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 95611755347 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6c zxid:0x3a txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6d zxid:0x3b txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 101651538980 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6d zxid:0x3b txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6e zxid:0x3c txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 99200249535 09:09:35 09:09:35.786 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6e zxid:0x3c txntype:14 reqpath:n/a 09:09:35 09:09:35.786 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - 
Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 09:09:35 09:09:35.787 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 09:09:35 09:09:35.787 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 09:09:35 09:09:35.787 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116440876358 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x6f zxid:0x3d txntype:14 reqpath:n/a 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 102620313481 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x6f zxid:0x3d txntype:14 reqpath:n/a 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x70 zxid:0x3e txntype:14 reqpath:n/a 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 107674518719 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x70 zxid:0x3e txntype:14 reqpath:n/a 09:09:35 09:09:35.788 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x71 zxid:0x3f txntype:14 reqpath:n/a 09:09:35 09:09:35.788 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 09:09:35 09:09:35.788 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 107886360498 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x71 zxid:0x3f txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x72 zxid:0x40 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 114739953220 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x72 zxid:0x40 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x73 zxid:0x41 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 117793366731 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x73 zxid:0x41 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x74 zxid:0x42 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 116440876358 09:09:35 09:09:35.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x74 zxid:0x42 txntype:14 reqpath:n/a 09:09:35 09:09:35.789 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 09:09:35 09:09:35.790 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 09:09:35 09:09:35.790 [main-SendThread(127.0.0.1:46481)] DEBUG 
org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 09:09:35 09:09:35.790 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 09:09:35 09:09:35.787 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.790 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 116440876358 09:09:35 09:09:35.790 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117573408383 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118363172446 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118363172446 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 118363172446 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 116893302328 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119691422034 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 
09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119691422034 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 119691422034 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120414294531 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120582431232 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120582431232 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.791 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 120582431232 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122724629539 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125189584042 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125189584042 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125189584042 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 122493100967 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123740451294 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.792 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123740451294 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123740451294 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124059079938 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125877867415 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125877867415 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.792 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125877867415 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125692955238 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126308840362 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126308840362 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126308840362 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128486873543 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128613491325 09:09:35 09:09:35.793 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128613491325 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128613491325 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125467824574 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 128871593235 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128871593235 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.793 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.794 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128871593235 09:09:35 09:09:35.794 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131679462589 09:09:35 09:09:35.794 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134576734403 09:09:35 09:09:35.819 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:35 09:09:35.819 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:35 09:09:35.823 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], 
topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:35 09:09:35.823 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:35 09:09:35.823 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.57,"requestQueueTimeMs":0.249,"localTimeMs":2.051,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.189,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:35 09:09:35.823 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, 
partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:35 09:09:35.824 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:35 09:09:35.824 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x75 zxid:0x43 txntype:14 reqpath:n/a 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 118363172446 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x75 zxid:0x43 txntype:14 reqpath:n/a 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x76 zxid:0x44 txntype:14 reqpath:n/a 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.843 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 119691422034 09:09:35 09:09:35.843 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x76 zxid:0x44 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x77 zxid:0x45 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 120582431232 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x77 zxid:0x45 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x78 zxid:0x46 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 125189584042 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x78 zxid:0x46 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x79 zxid:0x47 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 123740451294 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x79 
zxid:0x47 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7a zxid:0x48 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 125877867415 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7a zxid:0x48 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7b zxid:0x49 txntype:14 reqpath:n/a 09:09:35 09:09:35.844 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.845 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 126308840362 09:09:35 09:09:35.845 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7b zxid:0x49 txntype:14 reqpath:n/a 09:09:35 09:09:35.845 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7c zxid:0x4a txntype:14 reqpath:n/a 09:09:35 09:09:35.845 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 09:09:35 09:09:35.845 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: 
clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 09:09:35 09:09:35.845 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 09:09:35 09:09:35.845 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 09:09:35 09:09:35.845 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 09:09:35 09:09:35.846 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 09:09:35 09:09:35.846 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 09:09:35 09:09:35.846 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 128613491325 09:09:35 09:09:35.846 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7c zxid:0x4a txntype:14 reqpath:n/a 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7d zxid:0x4b txntype:14 reqpath:n/a 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 128871593235 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7d zxid:0x4b txntype:14 reqpath:n/a 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7e zxid:0x4c txntype:14 reqpath:n/a 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 134576734403 09:09:35 09:09:35.847 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7e zxid:0x4c txntype:14 reqpath:n/a 09:09:35 09:09:35.847 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: 
org.apache.zookeeper.MultiResponse@8dcf531d 09:09:35 09:09:35.847 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 09:09:35 09:09:35.847 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 126,14 replyHeader:: 126,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 134576734403 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.848 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 134576734403 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135334201236 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138078681471 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138078681471 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138078681471 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135865554906 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135894703990 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.848 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135894703990 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135894703990 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 137564085596 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141349641472 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141349641472 09:09:35 09:09:35.849 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141349641472 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 139960392553 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141645907320 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 141645907320 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:35 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:35 , 'ip,'127.0.0.1 09:09:35 ] 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 141645907320 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142665793863 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145085705433 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:35 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.849 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145085705433 09:09:36 09:09:35.849 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145085705433 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142819757630 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143660018723 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 
09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143660018723 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143660018723 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146034187104 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148684530981 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148684530981
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 148684530981
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146106897627
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147095847616
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147095847616
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.850 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.851 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147095847616
09:09:36 09:09:35.851 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 146644323985
09:09:36 09:09:35.851 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149158228226
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x7f zxid:0x4d txntype:14 reqpath:n/a
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 138078681471
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x7f zxid:0x4d txntype:14 reqpath:n/a
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x80 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x80 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x81 zxid:0x4e txntype:14 reqpath:n/a
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 135894703990
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x81 zxid:0x4e txntype:14 reqpath:n/a
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x82 zxid:0x4f txntype:14 reqpath:n/a
09:09:36 09:09:35.852 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 141349641472
09:09:36 09:09:35.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x82 zxid:0x4f txntype:14 reqpath:n/a
09:09:36 09:09:35.853 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317
09:09:36 09:09:35.853 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 128,3 replyHeader:: 128,77,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:36 09:09:35.853 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1
09:09:36 09:09:35.853 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x83 zxid:0x50 txntype:14 reqpath:n/a
09:09:36 09:09:35.853 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 141645907320
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x83 zxid:0x50 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x84 zxid:0x51 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 145085705433
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x84 zxid:0x51 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x85 zxid:0x52 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 143660018723
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x85 zxid:0x52 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x86 zxid:0x53 txntype:14 reqpath:n/a
09:09:36 09:09:35.854 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7
09:09:36 09:09:35.854 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792
09:09:36 09:09:35.854 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2
09:09:36 09:09:35.855 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 148684530981
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x86 zxid:0x53 txntype:14 reqpath:n/a
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x87 zxid:0x54 txntype:14 reqpath:n/a
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 147095847616
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x87 zxid:0x54 txntype:14 reqpath:n/a
09:09:36 09:09:35.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x88 zxid:0x55 txntype:14 reqpath:n/a
09:09:36 09:09:35.855 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c
09:09:36 09:09:35.855 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5
09:09:36 09:09:35.856 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 149158228226
09:09:36 09:09:35.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x88 zxid:0x55 txntype:14 reqpath:n/a
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.856 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 149158228226
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 149158228226
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151393012261
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154224669676
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154224669676
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 154224669676
09:09:36 09:09:35.856 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153930766524
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155784375173
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155784375173
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155784375173
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156505931361
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156863826977
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156863826977
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156863826977
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156276987506
09:09:36 09:09:35.857 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156408105968
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x89 zxid:0x56 txntype:14 reqpath:n/a
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 154224669676
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x89 zxid:0x56 txntype:14 reqpath:n/a
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x8a zxid:0x57 txntype:14 reqpath:n/a
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 155784375173
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x8a zxid:0x57 txntype:14 reqpath:n/a
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x8b zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x8c zxid:0x58 txntype:14 reqpath:n/a
09:09:36 09:09:35.858 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 156863826977
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x8c zxid:0x58 txntype:14 reqpath:n/a
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x8d zxid:0x59 txntype:14 reqpath:n/a
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 156408105968
09:09:36 09:09:35.859 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x8d zxid:0x59 txntype:14 reqpath:n/a
09:09:36 09:09:35.859 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2
09:09:36 09:09:35.859 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 138,14 replyHeader:: 138,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790
09:09:36 09:09:35.859 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 139,3 replyHeader:: 139,87,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39}
09:09:36 09:09:35.859 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0
09:09:36 09:09:35.859 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:36 09:09:35.859 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed
09:09:36 09:09:35.860 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:36 09:09:35.860 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:36 09:09:35.860 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973775860, latencyMs=36, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:36 09:09:35.860 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:36 09:09:35.861 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:36 09:09:35.861 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":36.193,"requestQueueTimeMs":0.11,"localTimeMs":35.771,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.081,"sendTimeMs":0.229,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 156408105968
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.874 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 156408105968
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155036658792
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156866860932
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156866860932
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 156866860932
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159790495740
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162649773635
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162649773635
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.876 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x8e zxid:0x5a txntype:14 reqpath:n/a
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162649773635
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164873789466
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164959560735
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164959560735
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 164959560735
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166778852886
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167676061156
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167676061156
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 167676061156
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 168980546908
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 169595332036
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.877 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.878
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 169595332036 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 169595332036 09:09:36 09:09:35.878 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 165598348860 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166936029173 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166936029173 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166936029173 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166437806699 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170388834328 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Permission requested: 1 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170388834328 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170388834328 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170960192914 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173570266394 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173570266394 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 173570266394 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 172720348776 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176707727664 09:09:36 09:09:35.879 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176707727664 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176707727664 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 177623873602 09:09:36 09:09:35.880 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges 
is: 178033787282 09:09:36 09:09:35.880 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.880 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 156866860932 09:09:36 09:09:35.880 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x8e zxid:0x5a txntype:14 reqpath:n/a 09:09:36 09:09:35.880 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178033787282 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178033787282 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179511721559 09:09:36 09:09:35.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179703480679 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x8f zxid:0x5b txntype:14 reqpath:n/a 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 162649773635 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x8f zxid:0x5b txntype:14 reqpath:n/a 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x90 zxid:0x5c txntype:14 reqpath:n/a 09:09:36 09:09:35.882 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,91,0 request:: 
org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 164959560735 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x90 zxid:0x5c txntype:14 reqpath:n/a 09:09:36 09:09:35.882 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x91 zxid:0x5d txntype:14 reqpath:n/a 09:09:36 09:09:35.882 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 167676061156 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x91 zxid:0x5d txntype:14 reqpath:n/a 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x92 zxid:0x5e txntype:14 reqpath:n/a 09:09:36 09:09:35.883 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,93,0 request:: 
org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 169595332036 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x92 zxid:0x5e txntype:14 reqpath:n/a 09:09:36 09:09:35.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x93 zxid:0x5f txntype:14 reqpath:n/a 09:09:36 09:09:35.883 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 166936029173 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x93 zxid:0x5f txntype:14 reqpath:n/a 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x94 zxid:0x60 txntype:14 reqpath:n/a 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 170388834328 09:09:36 09:09:35.884 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x94 zxid:0x60 txntype:14 reqpath:n/a 09:09:36 09:09:35.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x95 zxid:0x61 txntype:14 reqpath:n/a 09:09:36 09:09:35.885 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 09:09:36 09:09:35.885 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 173570266394 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x95 zxid:0x61 txntype:14 reqpath:n/a 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x96 zxid:0x62 txntype:14 reqpath:n/a 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.885 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 176707727664 09:09:36 09:09:35.886 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x96 zxid:0x62 txntype:14 reqpath:n/a 09:09:36 09:09:35.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x97 zxid:0x63 txntype:14 reqpath:n/a 09:09:36 09:09:35.886 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 09:09:36 09:09:35.886 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 09:09:36 09:09:35.886 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 178033787282 09:09:36 09:09:35.886 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x97 zxid:0x63 txntype:14 reqpath:n/a 09:09:36 09:09:35.886 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 09:09:36 09:09:35.887 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179703480679 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 179703480679 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178778146996 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180779562464 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180779562464 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 180779562464 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181024284082 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges 
is: 182602709032 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182602709032 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.887 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182602709032 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182896856890 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183847942453 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183847942453 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 183847942453 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges 
is: 185923476226 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186858605862 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186858605862 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 186858605862 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184782809843 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187932404805 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 187932404805 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges 
is: 187932404805 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188712105203 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189527935408 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.888 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189527935408 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189527935408 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 189702316834 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193972049628 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193972049628 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 
'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 193972049628 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194627899807 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196685877384 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196685877384 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196685877384 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196016327629 09:09:36 09:09:35.889 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196607565279 09:09:36 09:09:35.889 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x98 zxid:0x64 txntype:14 reqpath:n/a 09:09:36 09:09:35.890 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 179703480679 09:09:36 09:09:35.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x98 zxid:0x64 txntype:14 reqpath:n/a 09:09:36 09:09:35.890 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs 
for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 196607565279 09:09:36 09:09:35.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.891 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.891 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.891 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x99 zxid:0x65 txntype:14 reqpath:n/a 09:09:36 09:09:35.891 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 180779562464 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x99 zxid:0x65 txntype:14 reqpath:n/a 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9a zxid:0x66 txntype:14 
reqpath:n/a 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 182602709032 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9a zxid:0x66 txntype:14 reqpath:n/a 09:09:36 09:09:35.891 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9b zxid:0x67 txntype:14 reqpath:n/a 09:09:36 09:09:35.892 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 09:09:36 09:09:35.892 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 154,14 replyHeader:: 154,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 09:09:36 09:09:35.892 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 183847942453 09:09:36 09:09:35.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9b zxid:0x67 txntype:14 reqpath:n/a 09:09:36 09:09:35.892 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9c zxid:0x68 txntype:14 reqpath:n/a 09:09:36 09:09:35.893 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.893 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 09:09:36 09:09:35.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 186858605862 09:09:36 09:09:35.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9c zxid:0x68 txntype:14 reqpath:n/a 09:09:36 09:09:35.893 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 09:09:36 09:09:35.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9d zxid:0x69 txntype:14 reqpath:n/a 09:09:36 09:09:35.893 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.893 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 187932404805 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9d zxid:0x69 txntype:14 reqpath:n/a 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9e zxid:0x6a txntype:14 reqpath:n/a 09:09:36 09:09:35.894 [main-SendThread(127.0.0.1:46481)] 
DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 189527935408 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9e zxid:0x6a txntype:14 reqpath:n/a 09:09:36 09:09:35.894 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0x9f zxid:0x6b txntype:14 reqpath:n/a 09:09:36 09:09:35.895 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 193972049628 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0x9f zxid:0x6b txntype:14 reqpath:n/a 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xa0 zxid:0x6c txntype:14 reqpath:n/a 09:09:36 09:09:35.895 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - 
Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 196685877384 09:09:36 09:09:35.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xa0 zxid:0x6c txntype:14 reqpath:n/a 09:09:36 09:09:35.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xa1 zxid:0x6d txntype:14 reqpath:n/a 09:09:36 09:09:35.896 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 160,14 replyHeader:: 160,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 09:09:36 09:09:35.896 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 196607565279 09:09:36 09:09:35.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xa1 zxid:0x6d txntype:14 reqpath:n/a 09:09:36 09:09:35.896 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 161,14 replyHeader:: 161,109,0 request:: org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: 
org.apache.zookeeper.MultiResponse@5b4945d8 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 196607565279 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197316675409 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199980180071 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.896 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199980180071 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199980180071 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199086512949 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199852206262 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199852206262 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: 
[31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 199852206262 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198272113476 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201901133343 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201901133343 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.897 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201901133343 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 202340623509 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203549184616 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.897 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203549184616 09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG 
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 203549184616
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201917793949
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204712362825
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204712362825
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.898 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36 [... 09:09:35.898-35.899: the same ProcessThread(sid:0 cport:46481) DEBUG cycle (SessionTrackerImpl session check for 0x1000002945e0000, ZooKeeperServer permission 1/4 against ACL 31,s{'world,'anyone} with 'sasl,'zooclient / 'ip,'127.0.0.1 credentials, PrepRequestProcessor outstandingChanges digests) repeats near-verbatim for each queued multi operation; repeated entries elided ...]
09:09:36 09:09:35.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xa2 zxid:0x6e txntype:14 reqpath:n/a
09:09:36 09:09:35.900 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.900 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 199980180071
09:09:36 09:09:35.901 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xa2 zxid:0x6e txntype:14 reqpath:n/a
09:09:36 09:09:35.901 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939
09:09:36 [... 09:09:35.901-35.907: analogous multi requests cxid 0xa3-0xab / zxid 0x6f-0x77 for session 0x1000002945e0000 processed by SyncThread:0 and acknowledged via main-SendThread(127.0.0.1:46481); DataTree reports matching digests for every Zxid (6f: 199852206262 ... 77: 216177225825); repeated entries elided ...]
09:09:36 [... 09:09:35.907-35.910: further ProcessThread(sid:0 cport:46481) ACL/digest DEBUG cycles for session 0x1000002945e0000 (outstandingChanges digests 219920511208 through 239518807874), same pattern as above; repeated entries elided ...]
09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from
outstandingChanges is: 240855524274 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240855524274 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240855524274 09:09:36 09:09:35.910 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 239767981134 09:09:36 09:09:35.910 [ProcessThread(sid:0 
cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240491283608 09:09:36 09:09:35.922 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:35.922 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:35.925 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], 
topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:35.925 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:35.926 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:35.926 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:35.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xac zxid:0x78 txntype:14 reqpath:n/a 09:09:36 09:09:35.926 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.193,"requestQueueTimeMs":0.3,"localTimeMs":1.444,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.103,"sendTimeMs":0.343,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:35.926 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:35.926 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 219920511208 09:09:36 09:09:35.927 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 
type:multi cxid:0xac zxid:0x78 txntype:14 reqpath:n/a 09:09:36 09:09:35.927 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240491283608 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 
09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240491283608
09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242929677600
09:09:36 09:09:35.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243299849563
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xad zxid:0x79 txntype:14 reqpath:n/a
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 223980439124
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xad zxid:0x79 txntype:14 reqpath:n/a
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xae zxid:0x7a txntype:14 reqpath:n/a
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 228245626107
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xae zxid:0x7a txntype:14 reqpath:n/a
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xaf zxid:0x7b txntype:14 reqpath:n/a
09:09:36 09:09:35.934 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.934 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: org.apache.zookeeper.MultiResponse@58eef31b
09:09:36 09:09:35.934 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 233242367743
09:09:36 09:09:35.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xaf zxid:0x7b txntype:14 reqpath:n/a
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243299849563
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb0 zxid:0x7c txntype:14 reqpath:n/a
09:09:36 09:09:35.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.935 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2
09:09:36 09:09:35.935 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 239098699626
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb0 zxid:0x7c txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243299849563
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb1 zxid:0x7d txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240905618643
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.936 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 239117331553
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 244265710638
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb1 zxid:0x7d txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb2 zxid:0x7e txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.936 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log and actual tree: 238567709020
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb2 zxid:0x7e txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb3 zxid:0x7f txntype:14 reqpath:n/a
09:09:36 09:09:35.936 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 244265710638
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 238460694118
09:09:36 09:09:35.937 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 178,14 replyHeader:: 178,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: org.apache.zookeeper.MultiResponse@85c50f31
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb3 zxid:0x7f txntype:14 reqpath:n/a
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb4 zxid:0x80 txntype:14 reqpath:n/a
09:09:36 09:09:35.937 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 240855524274
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb4 zxid:0x80 txntype:14 reqpath:n/a
09:09:36 09:09:35.937 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb5 zxid:0x81 txntype:14 reqpath:n/a
09:09:36 09:09:35.937 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 244265710638 09:09:36 09:09:35.938 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:36 09:09:35.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 240491283608 09:09:36 09:09:35.938 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 09:09:36 09:09:35.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb5 zxid:0x81 txntype:14 reqpath:n/a 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 246881374731 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249159809548 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.938 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249159809548 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 249159809548 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248154775593 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252168453275 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Permission requested: 1 09:09:36 09:09:35.938 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252168453275 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 09:09:36 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252168453275 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 254053198899 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257236820451 09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.939 
[ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257236820451
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257236820451
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 254797107803
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255501812022
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255501812022
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb6 zxid:0x82 txntype:14 reqpath:n/a
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.939 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 82, Digest in log and actual tree: 243299849563
09:09:36 09:09:35.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb6 zxid:0x82 txntype:14 reqpath:n/a
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.939 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb7 zxid:0x83 txntype:14 reqpath:n/a
09:09:36 09:09:35.939 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255501812022
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255011191309
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.940 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 182,14 replyHeader:: 182,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 244265710638
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257990611860
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb7 zxid:0x83 txntype:14 reqpath:n/a
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xb8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xb8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:35.940 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257990611860
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xb9 zxid:0x84 txntype:14 reqpath:n/a
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.940 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 184,3 replyHeader:: 184,131,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 249159809548
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.940 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xb9 zxid:0x84 txntype:14 reqpath:n/a
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.940 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257990611860
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 257432328795
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 260579734257
09:09:36 09:09:35.941 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 260579734257
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 260579734257
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 262452044425
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264364073227
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264364073227
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.941 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xba zxid:0x85 txntype:14 reqpath:n/a
09:09:36 09:09:35.941 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone}
09:09:36
09:09:36 09:09:35.941 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.941 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 252168453275
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xba zxid:0x85 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xbb zxid:0x86 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264364073227
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.942 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, Digest in log and actual tree: 257236820451
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266756583795
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xbb zxid:0x86 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 271017000807
09:09:36 09:09:35.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xbc zxid:0x87 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 255501812022
09:09:36 09:09:35.942 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xbc zxid:0x87 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xbd zxid:0x88 txntype:14 reqpath:n/a
09:09:36 09:09:35.942 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.942 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 257990611860
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xbd zxid:0x88 txntype:14 reqpath:n/a
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xbe zxid:0x89 txntype:14 reqpath:n/a
09:09:36 09:09:35.943 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 260579734257
09:09:36 09:09:35.943 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xbe zxid:0x89 txntype:14 reqpath:n/a
09:09:36 09:09:35.943 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 190,14 replyHeader:: 190,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xbf zxid:0x8a txntype:14 reqpath:n/a
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 264364073227
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xbf zxid:0x8a txntype:14 reqpath:n/a
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:multi cxid:0xc0 zxid:0x8b txntype:14 reqpath:n/a
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 271017000807
09:09:36 09:09:35.944 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 191,14 replyHeader:: 191,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:multi cxid:0xc0 zxid:0x8b txntype:14 reqpath:n/a
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:35.944 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:35.944 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 192,14 replyHeader:: 192,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef
09:09:36 09:09:35.945 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 193,3 replyHeader:: 193,139,0 request:: '/brokers/topics/__consumer_offsets,T response::
s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39}
09:09:36 09:09:35.945 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:36 09:09:35.945 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:36 09:09:35.946 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:36 09:09:35.946 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973775946, latencyMs=20, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:36 09:09:35.946 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:36 09:09:35.946 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:36 09:09:35.946 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":18.886,"requestQueueTimeMs":0.141,"localTimeMs":18.518,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.068,"sendTimeMs":0.158,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.957 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.958 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0)
09:09:36 09:09:35.959 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions
09:09:36 09:09:35.960 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions
09:09:36 09:09:35.961 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=RNuWA2mhQKGQZwHuRiMICQ, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0),
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=40117)]) 09:09:36 09:09:35.962 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:36 09:09:35.964 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 09:09:36 09:09:35.992 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, 
__consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 09:09:36 09:09:35.992 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 09:09:36 09:09:35.994 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:35.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:35.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:35.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:35.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:35.994 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:35.994 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 194,4 replyHeader:: 194,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit11182757027218931278] 
Loading producer state till offset 0 with message format version 2 09:09:36 09:09:35.998 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:35.999 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:35.999 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.000 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 09:09:36 09:09:36.000 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 09:09:36 09:09:36.000 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.000 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.005 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.005 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.006 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.008 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.009 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.014 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.015 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 196,4 replyHeader:: 196,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.017 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.018 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.018 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.018 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41
09:09:36 09:09:36.019 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0
09:09:36 09:09:36.019 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.019 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.022 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.022 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.023 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.023 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.025 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.025 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.025 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.025 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.025 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.026 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.026 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.026 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:36 09:09:36.026 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.026 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.027 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10
09:09:36 09:09:36.027 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0
09:09:36 09:09:36.027 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.028 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.028 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.028 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.624,"requestQueueTimeMs":0.194,"localTimeMs":1.19,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.061,"sendTimeMs":0.177,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.028 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.028 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.029 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.030 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 09:09:36 09:09:36.030 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.030 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.030 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 198,3 replyHeader:: 198,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.031 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xc7 
zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.031 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.031 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 199,3 replyHeader:: 199,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.032 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
09:09:36 09:09:36.032 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.032 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.032 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776032, latencyMs=3, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.032 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 
09:09:36.033 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:36 09:09:36.033 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":3.382,"requestQueueTimeMs":0.074,"localTimeMs":3.136,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.043,"sendTimeMs":0.128,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.067 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 1 09:09:36 09:09:36.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.067 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.067 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.071 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already 
has size 10485756 09:09:36 09:09:36.071 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.072 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.072 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.072 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.072 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 09:09:36 09:09:36.072 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 09:09:36 09:09:36.073 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.073 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.128 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.128 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.132 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.132 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.132 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.203,"requestQueueTimeMs":0.24,"localTimeMs":1.662,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.082,"sendTimeMs":0.218,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.132 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], 
controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.132 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.133 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.135 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.135 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.136 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 201,3 replyHeader:: 201,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.137 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.137 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.137 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.137 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 202,3 replyHeader:: 202,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.138 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
09:09:36 09:09:36.138 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.139 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.139 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776139, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.139 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 
09:09:36.139 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:36 09:09:36.139 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":5.658,"requestQueueTimeMs":0.154,"localTimeMs":5.154,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.26,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.169 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.169 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.169 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 1 09:09:36 09:09:36.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.170 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.170 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 203,4 replyHeader:: 203,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.173 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.174 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already 
has size 10485756 09:09:36 09:09:36.174 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.174 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.175 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.176 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.176 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 09:09:36 09:09:36.176 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 09:09:36 09:09:36.176 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.176 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.181 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.181 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.181 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.182 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.182 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.183 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.183 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.184 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.184 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.184 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.185 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.185 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 09:09:36 09:09:36.185 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 09:09:36 09:09:36.185 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.185 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.192 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.192 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.192 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.192 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.192 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.192 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.192 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 205,4 replyHeader:: 205,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.194 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.194 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.194 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.194 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.194 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 09:09:36 09:09:36.195 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.196 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.201 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.201 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.201 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 206,4 replyHeader:: 206,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.203 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.203 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.204 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.204 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.204 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.204 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.206 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.210 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.210 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.211 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.212 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.212 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.212 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.212 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.212 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.212 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.213 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.220 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.220 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.220 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.222 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.222 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.222 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.222 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.223 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.223 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.223 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.223 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.223 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 09:09:36 09:09:36.224 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 09:09:36 09:09:36.224 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.224 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.228 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.228 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.229 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, 
includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.232 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.233 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 
with message format version 2 09:09:36 09:09:36.233 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.234 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.235 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.147,"requestQueueTimeMs":0.267,"localTimeMs":1.499,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.302,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:36.239 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.239 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.239 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.241 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
09:09:36 09:09:36.241 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata
09:09:36 09:09:36.241 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)}
09:09:36 09:09:36.242 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.242 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=15) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
09:09:36 09:09:36.242 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.242 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.242 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.242 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.243 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.243 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.244 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.244 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.244 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 211,3 replyHeader:: 211,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:36 09:09:36.244 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.245 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.245 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.245 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0
09:09:36 09:09:36.245 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 212,3 replyHeader:: 212,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39}
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:36 09:09:36.245 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.246 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:36 09:09:36.246 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:36 09:09:36.246 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776246, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:36 09:09:36.246 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:36 09:09:36.246 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:36 09:09:36.247 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":4.135,"requestQueueTimeMs":0.104,"localTimeMs":3.74,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.225,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:36.249 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.249 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.249 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.249 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.249 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.249 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.249 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 213,4 replyHeader:: 213,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.251 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.251 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.251 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.251 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.252 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.252 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.252 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.253 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.253 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9
09:09:36 09:09:36.253 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0
09:09:36 09:09:36.253 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.253 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.256 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.256 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.257 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.259 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.260 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.285 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.285 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.286 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.286 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 215,4 replyHeader:: 215,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.288 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.288 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.289 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.289 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.289 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.289 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.290 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.290 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.290 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31
09:09:36 09:09:36.290 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0
09:09:36 09:09:36.290 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.291 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.295 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.295 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.295 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.297 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.298 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.303 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.303 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.303 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 217,4 replyHeader:: 217,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.305 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.306 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.310 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.310 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.311 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.312 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.313 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.313 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.313 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.313 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.313 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.314 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.318 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.318 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.318 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.320 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.320 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 09:09:36 09:09:36.321 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 09:09:36 09:09:36.322 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.322 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.326 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.326 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.326 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.326 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.326 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.326 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.326 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 220,4 replyHeader:: 220,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.328 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.328 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.328 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.328 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.328 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.329 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.341 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.341 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.344 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.344 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.344 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.902,"requestQueueTimeMs":0.19,"localTimeMs":1.393,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.091,"sendTimeMs":0.226,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.345 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], 
controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.345 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.345 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.347 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.347 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.348 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.348 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 221,3 replyHeader:: 221,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.349 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.349 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.349 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.349 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 222,3 replyHeader:: 222,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.349 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
09:09:36 09:09:36.350 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.350 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.350 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776350, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.350 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 
09:09:36.351 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:36 09:09:36.351 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":4.659,"requestQueueTimeMs":0.14,"localTimeMs":4.148,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.116,"sendTimeMs":0.255,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.379 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 1 09:09:36 09:09:36.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.379 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.380 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.382 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.382 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.383 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.383 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already 
has size 10485756 09:09:36 09:09:36.383 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.383 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.384 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.384 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.384 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 09:09:36 09:09:36.384 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 09:09:36 09:09:36.385 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.385 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.389 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.390 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.390 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.392 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.392 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.392 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.392 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.393 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.404 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.404 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.404 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.408 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.408 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.408 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.408 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.408 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.409 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.410 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.444 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.444 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.444 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.445 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, 
includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.444 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 
with message format version 2 09:09:36 09:09:36.446 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.447 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.449 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.503,"requestQueueTimeMs":0.296,"localTimeMs":1.707,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.204,"sendTimeMs":0.295,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.449 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, 
partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.450 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.450 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.450 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.450 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=19) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.452 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.452 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000002945e0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.453 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.453 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 227,4 replyHeader:: 227,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.453 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.454 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 228,3 replyHeader:: 228,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.454 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.454 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.455 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 229,3 replyHeader:: 229,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.455 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
09:09:36 09:09:36.455 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.455 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.455 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.455 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.455 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 
09:09:36 09:09:36.456 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":4.93,"requestQueueTimeMs":0.171,"localTimeMs":4.56,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.066,"sendTimeMs":0.133,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.456 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 09:09:36 09:09:36.456 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 09:09:36 09:09:36.456 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776456, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 
09:09:36.456 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.456 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 09:09:36.457 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 09:09:36 09:09:36.457 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
09:09:36 09:09:36.462 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.462 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.462 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 230,4 replyHeader:: 230,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.466 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.466 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.467 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.467 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.467 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.468 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.469 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.469 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.469 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 09:09:36 09:09:36.470 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 09:09:36 09:09:36.470 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.470 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.474 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.475 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 231,4 replyHeader:: 231,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.477 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.478 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.478 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.479 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.483 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.483 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.483 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.483 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.483 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.483 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.484 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.486 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.486 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.486 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.486 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.487 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.487 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.488 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.492 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.493 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.493 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.495 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.495 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.495 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.495 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.496 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.496 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.496 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.496 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.497 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 09:09:36 09:09:36.497 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 09:09:36 09:09:36.497 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.497 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.502 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.502 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.502 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.505 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.505 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.505 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.505 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.505 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.505 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.506 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.506 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.507 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 09:09:36 09:09:36.507 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 09:09:36 09:09:36.507 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.507 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.549 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.550 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=20) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.560 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=20): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.560 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.560 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":20,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.687,"requestQueueTimeMs":0.451,"localTimeMs":1.798,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.111,"sendTimeMs":0.324,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.561 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 11 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], 
controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.561 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.561 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=21) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.562 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.562 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.563 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: 
clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.563 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.563 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.564 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 236,3 replyHeader:: 236,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.564 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002945e0000 type:exists cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.565 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 237,3 replyHeader:: 237,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.565 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 09:09:36 09:09:36.566 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.566 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":21,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":4.167,"requestQueueTimeMs":0.091,"localTimeMs":3.926,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.044,"sendTimeMs":0.104,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.566 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=21): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.567 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776566, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=21), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', 
nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.567 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 09:09:36.567 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:36 09:09:36.568 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.568 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.568 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.568 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.569 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.569 
[data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.570 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.571 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.571 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 09:09:36 09:09:36.571 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 09:09:36 09:09:36.571 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.571 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.582 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.582 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 238,4 replyHeader:: 238,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.588 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.588 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.589 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.589 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.589 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.589 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.590 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.594 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.595 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.595 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 239,4 replyHeader:: 239,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.597 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.597 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.597 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.597 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.598 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.598 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.598 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.599 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.599 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 09:09:36 09:09:36.599 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 09:09:36 09:09:36.599 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.599 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.604 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.604 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.606 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.606 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.606 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.606 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.606 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.607 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.607 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.607 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.607 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 09:09:36 09:09:36.607 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 09:09:36 09:09:36.608 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.608 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.620 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.620 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.620 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.623 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.623 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.623 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.623 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.623 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.623 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.624 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.624 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.624 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 09:09:36 09:09:36.624 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 09:09:36 09:09:36.624 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.625 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.630 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:36 09:09:36.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.630 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 242,4 replyHeader:: 242,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.632 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.633 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.633 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.633 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 09:09:36 09:09:36.633 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.634 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.660 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.661 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=22) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:36 09:09:36.664 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=22): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:36 09:09:36.664 [main] DEBUG 
org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:36 09:09:36.664 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":22,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.08,"requestQueueTimeMs":0.308,"localTimeMs":1.351,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.117,"sendTimeMs":0.303,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.665 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 12 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], 
controller=localhost:40117 (id: 1 rack: null)} 09:09:36 09:09:36.665 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:36 09:09:36.665 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=23) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:36 09:09:36.668 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:36 09:09:36.669 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 243,3 replyHeader:: 243,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:36 09:09:36.670 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.670 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:36 09:09:36.670 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 244,3 replyHeader:: 244,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:36 09:09:36.670 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
09:09:36 09:09:36.671 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.672 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=23): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.672 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":23,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":5.565,"requestQueueTimeMs":0.169,"localTimeMs":5.08,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.073,"sendTimeMs":0.241,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 
09:09:36 09:09:36.672 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776671, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=23), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.672 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 09:09:36.672 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
09:09:36 09:09:36.703 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.704 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.707 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.707 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.707 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.707 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.708 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.708 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.709 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.723 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.723 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.723 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.726 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.726 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.727 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.727 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.727 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.727 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.728 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.729 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.729 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35
09:09:36 09:09:36.729 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0
09:09:36 09:09:36.729 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.729 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.764 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.765 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=24) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:36 09:09:36.767 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=24): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
09:09:36 09:09:36.767 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata
09:09:36 09:09:36.767 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":24,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.737,"requestQueueTimeMs":0.259,"localTimeMs":1.039,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.126,"sendTimeMs":0.311,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:36.768 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 13 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)}
09:09:36 09:09:36.768 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.768 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=25) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
09:09:36 09:09:36.770 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.770 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.771 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.771 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 247,3 replyHeader:: 247,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:36 09:09:36.772 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.772 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.772 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.773 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 248,3 replyHeader:: 248,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39}
09:09:36 09:09:36.773 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:36 09:09:36.774 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
09:09:36 09:09:36.775 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=25): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
09:09:36 09:09:36.775 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776775, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=25), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
09:09:36 09:09:36.775 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed:
09:09:36 09:09:36.775 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":25,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":6.405,"requestQueueTimeMs":0.21,"localTimeMs":5.715,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.194,"sendTimeMs":0.285,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:36.775 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata
09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
09:09:36 09:09:36.808 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.808 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.809 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 249,4 replyHeader:: 249,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.812 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.812 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.812 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.812 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.812 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.813 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.813 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.813 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.813 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5
09:09:36 09:09:36.814 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0
09:09:36 09:09:36.814 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.814 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.841 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.841 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.841 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.846 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.846 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.846 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.846 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.847 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.847 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.848 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.848 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.849 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20
09:09:36 09:09:36.849 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0
09:09:36 09:09:36.849 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.849 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.868 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.868 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=26) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:36 09:09:36.872 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=26): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
09:09:36 09:09:36.872 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata
09:09:36 09:09:36.872 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":26,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.624,"requestQueueTimeMs":0.447,"localTimeMs":1.752,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.117,"sendTimeMs":0.306,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:36 09:09:36.873 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 14 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)}
09:09:36 09:09:36.873 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null)
09:09:36 09:09:36.873 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=27) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
09:09:36 09:09:36.875 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.875 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets
09:09:36 09:09:36.876 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 251,3 replyHeader:: 251,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response::
09:09:36 09:09:36.876 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.877 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets
09:09:36 09:09:36.877 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 252,3 replyHeader:: 252,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39}
09:09:36 09:09:36.877 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists.
09:09:36 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists.
09:09:36 09:09:36.878 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:36 09:09:36.879 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=27): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:36 09:09:36.879 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776878, latencyMs=5, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=27), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:36 09:09:36.879 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:36 
09:09:36.879 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:36 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:36 09:09:36.879 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":27,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":5.052,"requestQueueTimeMs":0.169,"localTimeMs":4.562,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.099,"sendTimeMs":0.22,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:36 09:09:36.881 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:36 09:09:36.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 09:09:36 09:09:36.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission 
requested: 1 09:09:36 09:09:36.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:36 ] 09:09:36 09:09:36.881 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:36 , 'ip,'127.0.0.1 09:09:36 ] 09:09:36 09:09:36.882 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 253,4 replyHeader:: 253,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37} 09:09:36 09:09:36.884 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 09:09:36 09:09:36.884 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 09:09:36 09:09:36.884 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 09:09:36 09:09:36.884 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already 
has size 10485756 09:09:36 09:09:36.884 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 09:09:36 09:09:36.885 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 09:09:36 09:09:36.886 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
09:09:36 09:09:36.890 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.890 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.890 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 254,4 replyHeader:: 254,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.892 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.892 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.892 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.892 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.892 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.893 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.899 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.899 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.899 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 255,4 replyHeader:: 255,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.903 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.903 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.903 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.903 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.903 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.904 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.904 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.904 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.904 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12
09:09:36 09:09:36.904 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0
09:09:36 09:09:36.905 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.905 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.909 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.909 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.910 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.910 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 256,4 replyHeader:: 256,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.913 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.914 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.919 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.919 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.919 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 257,4 replyHeader:: 257,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.921 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.921 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.922 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.928 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.928 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.928 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 258,4 replyHeader:: 258,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.930 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.931 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.935 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x103 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x103 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.935 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.936 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.936 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 259,4 replyHeader:: 259,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.937 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.937
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.937 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.938 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.938 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.938 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.939 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.942 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x104 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x104 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.942 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.942 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 260,4 replyHeader:: 260,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.944
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.944 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.945 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.945 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13
09:09:36 09:09:36.945 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0
09:09:36 09:09:36.945 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.945 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.949 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:36 09:09:36.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x105 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x105 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets
09:09:36 09:09:36.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1
09:09:36 09:09:36.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone}
09:09:36 ]
09:09:36 09:09:36.949 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient
09:09:36 , 'ip,'127.0.0.1
09:09:36 ]
09:09:36 09:09:36.949 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 261,4 replyHeader:: 261,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1770973775609,1770973775609,0,0,0,0,109,0,37}
09:09:36 09:09:36.962 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0
09:09:36 09:09:36.962
[data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760
09:09:36 09:09:36.962 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0
09:09:36 09:09:36.962 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756
09:09:36 09:09:36.963 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit11182757027218931278] Loading producer state till offset 0 with message format version 2
09:09:36 09:09:36.963 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms.
09:09:36 09:09:36.963 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms.
09:09:36 09:09:36.963 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit11182757027218931278/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600}
09:09:36 09:09:36.964 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28
09:09:36 09:09:36.964 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0
09:09:36 09:09:36.964 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(RNuWA2mhQKGQZwHuRiMICQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1.
09:09:36 09:09:36.964 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries.
09:09:36 09:09:36.971 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0
09:09:37 09:09:36.972 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0
09:09:37 09:09:36.972 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:37 09:09:36.972 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=28) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0
09:09:37 09:09:36.973 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0
09:09:37 09:09:36.974 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.975 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0
09:09:37 09:09:36.975 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.976 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":28,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":1.778,"requestQueueTimeMs":0.291,"localTimeMs":1.162,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.082,"sendTimeMs":0.242,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms.
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 09:09:37 09:09:36.976 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 09:09:37 09:09:36.977 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=28): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler 
- Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 09:09:37 09:09:36.977 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 09:09:37 09:09:36.977 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 09:09:37 09:09:36.978 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 15 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 09:09:37 09:09:36.978 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null) 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.978 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=29) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 09:09:37 09:09:36.978 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 
09:09:37 09:09:36.979 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 1015ms correlationId 3 from controller 1 for 50 partitions 09:09:37 09:09:36.980 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:37 09:09:36.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x106 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:37 09:09:36.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x106 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 09:09:37 09:09:36.980 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 262,3 replyHeader:: 262,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 09:09:37 09:09:36.981 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RNuWA2mhQKGQZwHuRiMICQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 09:09:37 09:09:36.981 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.982 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"RNuWA2mhQKGQZwHuRiMICQ","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecove
ryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"i
sr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[]
,"isNew":true,"leaderRecoveryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"l
eader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplic
as":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":40117}]},"response":{"errorCode":0,"topics":[{"topicId":"RNuWA2mhQKGQZwHuRiMICQ","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex
":2,"errorCode":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49630-0","totalTimeMs":1016.554,"requestQueueTimeMs":0.745,"localTimeMs":1015.557,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.104,"sendTimeMs":0.148,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:37 09:09:36.982 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 09:09:37 09:09:36.982 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 09:09:37 09:09:36.982 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 09:09:37 09:09:36.982 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=RNuWA2mhQKGQZwHuRiMICQ, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=40117, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 09:09:37 09:09:36.982 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:37 09:09:36.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:exists cxid:0x107 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:37 09:09:36.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:exists cxid:0x107 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 09:09:37 09:09:36.983 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.983 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 09:09:37 09:09:36.983 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 263,3 replyHeader:: 263,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1770973775670,1770973775670,0,1,0,0,548,1,39} 09:09:37 09:09:36.983 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 09:09:37 09:09:36.983 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 09:09:37 09:09:36.983 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. 09:09:37 org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 09:09:37 09:09:36.984 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.984 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 09:09:37 09:09:36.984 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 09:09:37 09:09:36.984 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 09:09:37 09:09:36.984 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 09:09:37 09:09:36.984 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.984 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 09:09:37 09:09:36.984 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 09:09:37 09:09:36.985 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 09:09:37 09:09:36.985 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=29): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 09:09:37 09:09:36.985 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":29,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":5.934,"requestQueueTimeMs":0.141,"localTimeMs":5.132,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.551,"sendTimeMs":0.107,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:37 09:09:36.985 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.985 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973776984, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=29), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 09:09:37 09:09:36.985 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator lookup failed: 09:09:37 09:09:36.985 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 09:09:37 09:09:36.985 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Coordinator discovery failed, refreshing metadata 09:09:37 org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 09:09:37 09:09:36.985 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
09:09:37 09:09:36.985 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 09:09:37 09:09:36.985 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 09:09:37 09:09:36.986 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 09:09:37 09:09:36.987 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"RNuWA2mhQKGQZwHuRiMICQ","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,
"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"parti
tionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1
],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":40117,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49630-0","totalTimeMs":1.981,"requestQueueTimeMs":0.712,"localTimeM
s":1.03,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.051,"sendTimeMs":0.186,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}}
09:09:37 09:09:36.987 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 13 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler.
09:09:37 09:09:36.987 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0
09:09:37 09:09:36.987 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.987 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0
09:09:37 09:09:36.988 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler.
09:09:37 09:09:36.988 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0
09:09:37 09:09:36.988 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.988 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0
09:09:37 09:09:36.988 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.988 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0
09:09:37 09:09:36.988 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.988 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0
09:09:37 09:09:36.988 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.988 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0
09:09:37 09:09:36.989 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.989 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0
09:09:37 09:09:36.989 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
09:09:37 09:09:36.989 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0
09:09:37 09:09:36.989 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
09:09:37 09:09:36.989 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0
09:09:37 09:09:36.989 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
09:09:37 09:09:36.989 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0
09:09:37 09:09:36.989 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
09:09:37 09:09:36.989 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0
09:09:37 09:09:36.990 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler.
09:09:37 09:09:36.990 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0
09:09:37 09:09:36.991 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler.
09:09:37 09:09:36.991 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0
09:09:37 09:09:36.991 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
09:09:37 09:09:36.991 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0
09:09:37 09:09:36.991 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 15 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0
09:09:37 09:09:36.992 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0
09:09:37 09:09:36.992 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0
09:09:37 09:09:36.992 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0
09:09:37 09:09:36.992 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0
09:09:37 09:09:36.992 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.992 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 16 milliseconds for epoch 0, of which 15 milliseconds was spent in the scheduler.
09:09:37 09:09:36.993 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.993 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.993 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.993 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.993 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0
09:09:37 09:09:36.993 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0
09:09:37 09:09:36.994 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0
09:09:37 09:09:36.994 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0
09:09:37 09:09:36.994 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0
09:09:37 09:09:36.994 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0
09:09:37 09:09:36.994 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 16 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.994 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 17 milliseconds for epoch 0, of which 16 milliseconds was spent in the scheduler.
09:09:37 09:09:36.995 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.995 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.995 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.995 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.995 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0
09:09:37 09:09:36.995 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 17 milliseconds for epoch 0, of which 17 milliseconds was spent in the scheduler.
09:09:37 09:09:36.996 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0
09:09:37 09:09:36.996 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler.
09:09:37 09:09:36.996 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0
09:09:37 09:09:36.996 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 18 milliseconds for epoch 0, of which 18 milliseconds was spent in the scheduler.
09:09:37 09:09:37.077 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:37 09:09:37.078 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=30) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:37 09:09:37.081 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=30): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648)
09:09:37 09:09:37.081 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata
09:09:37 09:09:37.081 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":30,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.169,"requestQueueTimeMs":0.323,"localTimeMs":1.439,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.131,"sendTimeMs":0.274,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:37 09:09:37.082 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Updated cluster metadata updateVersion 16 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)}
09:09:37 09:09:37.082 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FindCoordinator request to broker localhost:40117 (id: 1 rack: null)
09:09:37 09:09:37.082 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=31) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group])
09:09:37 09:09:37.085 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=31): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=40117, errorCode=0, errorMessage='')])
09:09:37 09:09:37.086 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1770973777085, latencyMs=3, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=31), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=40117, errorCode=0, errorMessage='')]))
09:09:37 09:09:37.086 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":31,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":40117,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":2.69,"requestQueueTimeMs":0.129,"localTimeMs":2.218,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.075,"sendTimeMs":0.266,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:37 09:09:37.086 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Discovered group coordinator localhost:40117 (id: 2147483646 rack: null)
09:09:37 09:09:37.086 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:37 09:09:37.086 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 2147483646 rack: null) using address localhost/127.0.0.1
09:09:37 09:09:37.086 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:37 09:09:37.086 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:37 09:09:37.087 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:49644 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400]
09:09:37 09:09:37.087 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:49644
09:09:37 09:09:37.090 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId
09:09:37 09:09:37.090 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Heartbeat thread started
09:09:37 09:09:37.090 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Marking assigned partitions pending for revocation: []
09:09:37 09:09:37.092 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending asynchronous auto-commit of offsets {}
09:09:37 09:09:37.095 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646
09:09:37 09:09:37.095 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE
09:09:37 09:09:37.095 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions.
09:09:37 09:09:37.095 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication
09:09:37 09:09:37.095 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication
09:09:37 09:09:37.095 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] (Re-)joining group
09:09:37 09:09:37.095 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication
09:09:37 09:09:37.096 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Joining group with current subscription: [my-test-topic]
09:09:37 09:09:37.101 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:40117 (id: 2147483646 rack: null)
09:09:37 09:09:37.102 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST
09:09:37 09:09:37.102 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE
09:09:37 09:09:37.102 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication
09:09:37 09:09:37.102 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client
09:09:37 09:09:37.103 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication
09:09:37 09:09:37.105 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INITIAL
09:09:37 09:09:37.105 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to INTERMEDIATE
09:09:37 09:09:37.105 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed asynchronous auto-commit of offsets {}
09:09:37 09:09:37.105 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client
09:09:37 09:09:37.105 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication
09:09:37 09:09:37.105 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1
09:09:37 09:09:37.105 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to COMPLETE
09:09:37 09:09:37.105 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication
09:09:37 09:09:37.106 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1
09:09:37 09:09:37.106 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating API versions fetch from node 2147483646.
09:09:37 09:09:37.106 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=33) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1')
09:09:37 09:09:37.107 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=33): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0,
finalizedFeatures=[]) 09:09:37 09:09:37.108 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], 
ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:37 09:09:37.108 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=32) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 09:09:37 09:09:37.108 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":33,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"
apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":1.113,"requestQueueTimeMs":0.203,"localTimeMs":0.607,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.068,"sendTimeMs":0.234,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:37 09:09:37.121 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 and request the member to rejoin with this id. 
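The subscription metadata in the JOIN_GROUP exchange appears twice in this log: as a signed byte array in the client's request dump and as base64 in the broker's request logger. Decoding either form recovers the subscribed topic. A minimal sketch, assuming the ConsumerProtocolSubscription layout used by the 3.3.1 Java client (big-endian: int16 version, int32 topic count, length-prefixed topic names, then an int32 user-data length where -1 means null); the trailing owned-partitions array is left unparsed:

```python
import base64
import struct

def decode_subscription(metadata_b64: str):
    """Decode a ConsumerProtocolSubscription blob as logged by the broker."""
    buf = base64.b64decode(metadata_b64)
    version, topic_count = struct.unpack_from(">hi", buf, 0)
    off = 6
    topics = []
    for _ in range(topic_count):
        (name_len,) = struct.unpack_from(">h", buf, off)
        off += 2
        topics.append(buf[off:off + name_len].decode("utf-8"))
        off += name_len
    (user_data_len,) = struct.unpack_from(">i", buf, off)
    off += 4
    user_data = None if user_data_len == -1 else buf[off:off + user_data_len]
    return version, topics, user_data

# The 'range' protocol metadata from the JOIN_GROUP request in this log:
print(decode_subscription("AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="))
# → (1, ['my-test-topic'], None)
```

The same function applied to the logged 'cooperative-sticky' metadata yields the same topic list plus four bytes of assignor user data.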
09:09:37 09:09:37.127 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":32,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","members":[]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":17.241,"requestQueueTimeMs":3.876,"localTimeMs":13.007,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.092,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:37 09:09:37.128 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=32): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', members=[]) 09:09:37 09:09:37.128 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 and then rejoin. Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 09:09:37 09:09:37.128 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 09:09:37 09:09:37.128 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 09:09:37 09:09:37.128 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] (Re-)joining group 09:09:37 09:09:37.128 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Joining group with current subscription: [my-test-topic] 09:09:37 09:09:37.129 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:40117 (id: 2147483646 rack: null) 09:09:37 09:09:37.129 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=34) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, 
memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 09:09:37 09:09:37.135 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 joins group mso-group in Empty state. Adding to the group now. 09:09:37 09:09:37.138 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:37 09:09:37.140 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 09:09:39 09:09:39.705 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 09:09:39 09:09:39.714 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 09:09:39 09:09:39.714 [controller-event-thread] 
DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 09:09:40 09:09:40.149 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 09:09:40 09:09:40.153 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:40 09:09:40.154 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=34): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', skipAssignment=false, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 09:09:40 09:09:40.154 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', 
leader='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', skipAssignment=false, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 09:09:40 09:09:40.154 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Enabling heartbeat thread 09:09:40 09:09:40.155 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":34,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to 
MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","skipAssignment":false,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","members":[{"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":3022.073,"requestQueueTimeMs":0.314,"localTimeMs":9.155,"remoteTimeMs":3011.544,"throttleTimeMs":0,"responseQueueTimeMs":0.125,"sendTimeMs":0.933,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:40 09:09:40.155 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', protocol='range'} 09:09:40 09:09:40.156 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 09:09:40 09:09:40.161 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Finished assignment for 
group at generation 1: {mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0=Assignment(partitions=[my-test-topic-0])} 09:09:40 09:09:40.165 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:40117 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 09:09:40 09:09:40.167 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=35) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 09:09:40 09:09:40.175 [data-plane-kafka-request-handler-1] DEBUG 
kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 09:09:40 09:09:40.175 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 for group mso-group for generation 1. The group has 1 members, 0 of which are static. 09:09:40 09:09:40.225 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 1 (exclusive) with recovery point 1, last flushed: 1770973776456, current time: 1770973780225, unflushed: 1 09:09:40 09:09:40.301 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 09:09:40 09:09:40.305 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 103 ms 09:09:40 09:09:40.317 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:40 09:09:40.318 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=35): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1,
-1, -1, -1]) 09:09:40 09:09:40.318 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 09:09:40 09:09:40.318 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', protocol='range'} 09:09:40 09:09:40.319 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 09:09:40 09:09:40.319 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":35,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":148.982,"requestQueueTimeMs":1.55,"localTimeMs":146.604,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.436,"sendTimeMs":0.39,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:40 09:09:40.319 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 09:09:40 09:09:40.323 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 09:09:40 09:09:40.326 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 09:09:40 09:09:40.328 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=36) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 09:09:40 09:09:40.350 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=36): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 09:09:40 09:09:40.350 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":36,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":20.581,"requestQueueTimeMs":3.736,"localTimeMs":16.428,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.293,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:40 09:09:40.351 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Found no committed offset for partition my-test-topic-0 09:09:40 09:09:40.357 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:40117 (id: 1 rack: null) 09:09:40 09:09:40.358 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, 
correlationId=37) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 09:09:40 09:09:40.377 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=37): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 09:09:40 09:09:40.377 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":37,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":17.638,"requestQueueTimeMs":1.891,"localTimeMs":15.315,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.318,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:40 09:09:40.378 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - 
[Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. Fetched offset 0, timestamp -1 09:09:40 09:09:40.379 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 09:09:40 09:09:40.380 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}}. 09:09:40 09:09:40.385 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:40 09:09:40.385 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
09:09:40 09:09:40.386 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:40 09:09:40.389 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=38) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 09:09:40 09:09:40.399 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 
09:09:40 09:09:40.928 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1021167583, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1770973780925, lastUsedMs=1770973780925, epoch=1) 09:09:40 09:09:40.932 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1021167583 returning 1 partition(s) 09:09:40 09:09:40.941 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=38): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[FetchableTopicResponse(topic='', topicId=rpOTusxPRiGyrsjjjH_fwA, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 09:09:40 09:09:40.942 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1021167583 with 1 response partition(s) 09:09:40 09:09:40.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":38,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"rpOTusxPRiGyrsjjjH_fwA","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[{"topicId":"rpOTusxPRiGyrsjjjH_fwA","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":550.87,"requestQueueTimeMs":4.83,"localTimeMs":21.384,"remoteTimeMs":523.871,"throttleTimeMs":0,"responseQueueTimeMs":0.188,"sendTimeMs":0.595,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:40 09:09:40.943 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 09:09:40 09:09:40.945 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - 
[Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:40 09:09:40.945 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=1) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:40 09:09:40.945 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:40 09:09:40.945 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 09:09:40 09:09:40.948 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:41 09:09:41.455 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with 
session id 1021167583 returning 0 partition(s) 09:09:41 09:09:41.457 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:41 09:09:41.457 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:41 09:09:41.457 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":510.885,"requestQueueTimeMs":0.247,"localTimeMs":5.9,"remoteTimeMs":504.157,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.418,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:41 09:09:41.458 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added 
READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:41 09:09:41.458 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:41 09:09:41.458 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:41 09:09:41.458 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=40) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 09:09:41 09:09:41.459 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:41 09:09:41.961 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:41 09:09:41.963 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=40): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:41 09:09:41.963 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:41 09:09:41.963 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":40,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.62,"requestQueueTimeMs":0.219,"localTimeMs":1.378,"remoteTimeMs":501.424,"throttleTimeMs":0,"responseQueueTimeMs":0.17,"sendTimeMs":0.427,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:41 09:09:41.963 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:41 09:09:41.964 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=3) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:41 09:09:41.964 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:41 09:09:41.964 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 09:09:41 09:09:41.965 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:42 09:09:42.466 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:42 09:09:42.468 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:42 09:09:42.468 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.001,"requestQueueTimeMs":0.21,"localTimeMs":0.925,"remoteTimeMs":501.158,"throttleTimeMs":0,"responseQueueTimeMs":0.367,"sendTimeMs":0.339,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:42 09:09:42.468 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:42 09:09:42.469 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:42 09:09:42.469 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=4) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:42 09:09:42.469 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:42 09:09:42.469 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 09:09:42 09:09:42.470 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:42 09:09:42.973 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:42 09:09:42.974 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received 
FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:42 09:09:42.974 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.756,"requestQueueTimeMs":0.227,"localTimeMs":1.269,"remoteTimeMs":501.787,"throttleTimeMs":0,"responseQueueTimeMs":0.158,"sendTimeMs":0.313,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:42 09:09:42.974 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:42 09:09:42.975 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node 
localhost:40117 (id: 1 rack: null) 09:09:42 09:09:42.975 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=5) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:42 09:09:42.975 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:42 09:09:42.976 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=43) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 09:09:42 09:09:42.977 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:43 09:09:43.156 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 to coordinator localhost:40117 (id: 2147483646 
rack: null) 09:09:43 09:09:43.159 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=44) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null) 09:09:43 09:09:43.164 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:43 09:09:43.167 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=44): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 09:09:43 09:09:43.167 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful Heartbeat response 09:09:43 09:09:43.167 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":44,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":6.905,"requestQueueTimeMs":1.572,"localTimeMs":5.073,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.084,"sendTimeMs":0.174,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:43 09:09:43.479 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:43 09:09:43.480 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=43): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:43 09:09:43.481 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:43 09:09:43.481 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":43,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.993,"requestQueueTimeMs":0.166,"localTimeMs":1.527,"remoteTimeMs":501.744,"throttleTimeMs":0,"responseQueueTimeMs":0.182,"sendTimeMs":0.372,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:43 09:09:43.481 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:43 09:09:43.481 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=6) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:43 09:09:43.481 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:43 09:09:43.482 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 09:09:43 09:09:43.483 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:43 09:09:43.984 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:43 09:09:43.986 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:44 09:09:43.986 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:44 09:09:43.987 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.239,"requestQueueTimeMs":0.21,"localTimeMs":1.287,"remoteTimeMs":501.226,"throttleTimeMs":0,"responseQueueTimeMs":0.196,"sendTimeMs":0.318,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:44 09:09:43.987 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:44 09:09:43.987 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=7) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:44 09:09:43.987 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:44 09:09:43.987 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=46) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 09:09:44 09:09:43.988 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:44 09:09:44.523 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:44 09:09:44.524 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=46): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:44 09:09:44.525 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":46,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":536.38,"requestQueueTimeMs":0.166,"localTimeMs":33.702,"remoteTimeMs":501.998,"throttleTimeMs":0,"responseQueueTimeMs":0.174,"sendTimeMs":0.339,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:44 09:09:44.525 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:44 09:09:44.525 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:44 09:09:44.525 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=8) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:44 09:09:44.525 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:44 09:09:44.526 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=47) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 09:09:44 09:09:44.527 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:45 09:09:45.029 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:45 09:09:45.030 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=47): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:45 09:09:45.031 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":47,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.974,"requestQueueTimeMs":0.191,"localTimeMs":1.183,"remoteTimeMs":502.202,"throttleTimeMs":0,"responseQueueTimeMs":0.097,"sendTimeMs":0.299,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:45 09:09:45.031 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:45 09:09:45.031 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:45 09:09:45.032 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=9) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:45 09:09:45.032 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:45 09:09:45.032 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 09:09:45 09:09:45.033 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:45 09:09:45.324 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:45 09:09:45.329 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=49) and timeout 30000 to node 2147483646: 
OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 09:09:45 09:09:45.343 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:45 09:09:45.361 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1770973780300, current time: 1770973785361,unflushed: 1 09:09:45 09:09:45.382 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 09:09:45 09:09:45.382 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 31 ms 09:09:45 09:09:45.393 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=49): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 09:09:45 09:09:45.394 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":49,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":62.239,"requestQueueTimeMs":5.072,"localTimeMs":56.708,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.12,"sendTimeMs":0.338,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:45 09:09:45.394 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 09:09:45 09:09:45.394 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:45 09:09:45.536 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:45 09:09:45.537 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:45 09:09:45.537 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.978,"requestQueueTimeMs":0.205,"localTimeMs":1.191,"remoteTimeMs":501.973,"throttleTimeMs":0,"responseQueueTimeMs":0.177,"sendTimeMs":0.43,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:45 09:09:45.538 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:45 09:09:45.539 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:45 09:09:45.539 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=10) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:45 09:09:45.539 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:45 09:09:45.539 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=50) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 09:09:45 09:09:45.540 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:46 09:09:46.043 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:46 09:09:46.045 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=50): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:46 09:09:46.045 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":50,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.632,"requestQueueTimeMs":0.253,"localTimeMs":1.367,"remoteTimeMs":502.455,"throttleTimeMs":0,"responseQueueTimeMs":0.193,"sendTimeMs":0.362,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:46 09:09:46.045 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:46 09:09:46.046 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:46 09:09:46.046 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=11) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:46 09:09:46.046 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:46 09:09:46.046 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 09:09:46 09:09:46.047 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:46 09:09:46.156 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending Heartbeat request with generation 1 and member id 
mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 to coordinator localhost:40117 (id: 2147483646 rack: null) 09:09:46 09:09:46.157 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=52) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null) 09:09:46 09:09:46.158 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:46 09:09:46.160 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=52): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 09:09:46 09:09:46.160 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":52,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":1.638,"requestQueueTimeMs":0.287,"localTimeMs":0.867,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.106,"sendTimeMs":0.377,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:46 09:09:46.160 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful Heartbeat response 09:09:46 09:09:46.550 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:46 09:09:46.552 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:46 09:09:46.552 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.622,"requestQueueTimeMs":0.199,"localTimeMs":1.287,"remoteTimeMs":502.49,"throttleTimeMs":0,"responseQueueTimeMs":0.321,"sendTimeMs":0.324,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:46 09:09:46.552 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:46 09:09:46.553 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:46 09:09:46.553 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=12) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:46 09:09:46.553 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:46 09:09:46.554 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='')
09:09:46 09:09:46.555 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:46 09:09:46.993 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000
09:09:46 09:09:46.993 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
09:09:46 09:09:46.994 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a
09:09:47 09:09:46.994 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002945e0000 after 1ms.
09:09:47 09:09:47.057 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:47 09:09:47.058 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:47 09:09:47.058 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:47 09:09:47.058 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":502.959,"requestQueueTimeMs":0.187,"localTimeMs":1.103,"remoteTimeMs":501.123,"throttleTimeMs":0,"responseQueueTimeMs":0.163,"sendTimeMs":0.381,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:47 09:09:47.058 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:47 09:09:47.059 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=13) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:47 09:09:47.059 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:47 09:09:47.059 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='')
09:09:47 09:09:47.060 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:47 09:09:47.563 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:47 09:09:47.565 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.521,"requestQueueTimeMs":0.209,"localTimeMs":1.519,"remoteTimeMs":502.263,"throttleTimeMs":0,"responseQueueTimeMs":0.177,"sendTimeMs":0.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:47 09:09:47.565 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:47 09:09:47.565 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:47 09:09:47.566 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:47 09:09:47.566 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=14) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:47 09:09:47.566 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:47 09:09:47.566 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=55) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='')
09:09:47 09:09:47.567 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:48 09:09:48.069 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:48 09:09:48.071 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=55): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:48 09:09:48.071 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:48 09:09:48.071 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":55,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.653,"requestQueueTimeMs":0.244,"localTimeMs":1.298,"remoteTimeMs":501.624,"throttleTimeMs":0,"responseQueueTimeMs":0.139,"sendTimeMs":0.346,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:48 09:09:48.072 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:48 09:09:48.072 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:48 09:09:48.072 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:48 09:09:48.072 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=56) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='')
09:09:48 09:09:48.073 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:48 09:09:48.362 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.367 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.367 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.367 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.368 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.369 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.369 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.369 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.369 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.370 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.370 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.370 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.371 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.371 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.371 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.372 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.372 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.372 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.373 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.373 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.373 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.374 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.374 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.374 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.374 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.375 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.376 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.376 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.376 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.376 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.376 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.377 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.377 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.377 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.378 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.378 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.378 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.385 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.385 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.386 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.386 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.386 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.386 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.387 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.387 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1770973788352 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0
09:09:48 09:09:48.576 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:48 09:09:48.577 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=56): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:48 09:09:48.577 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:48 09:09:48.577 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":56,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.814,"requestQueueTimeMs":0.239,"localTimeMs":1.169,"remoteTimeMs":501.888,"throttleTimeMs":0,"responseQueueTimeMs":0.176,"sendTimeMs":0.341,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:48 09:09:48.578 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:48 09:09:48.578 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:48 09:09:48.578 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:48 09:09:48.578 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='')
09:09:48 09:09:48.579 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:49 09:09:49.082 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:49 09:09:49.083 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:49 09:09:49.083 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.719,"requestQueueTimeMs":0.197,"localTimeMs":1.168,"remoteTimeMs":501.951,"throttleTimeMs":0,"responseQueueTimeMs":0.172,"sendTimeMs":0.228,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:49 09:09:49.083 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s)
09:09:49 09:09:49.084 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null)
09:09:49 09:09:49.084 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=17) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s)
09:09:49 09:09:49.084 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null)
09:09:49 09:09:49.084 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='')
09:09:49 09:09:49.085 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s)
09:09:49 09:09:49.157 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 to coordinator localhost:40117 (id: 2147483646 rack: null)
09:09:49 09:09:49.157 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=59) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null)
09:09:49 09:09:49.158 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations
09:09:49 09:09:49.159 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=59): HeartbeatResponseData(throttleTimeMs=0, errorCode=0)
09:09:49 09:09:49.159 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful Heartbeat response
09:09:49 09:09:49.159 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":59,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":1.31,"requestQueueTimeMs":0.241,"localTimeMs":0.782,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.076,"sendTimeMs":0.209,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}}
09:09:49 09:09:49.588 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s)
09:09:49 09:09:49.589 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[])
09:09:49 09:09:49.590 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.853,"requestQueueTimeMs":0.174,"localTimeMs":1.036,"remoteTimeMs":502.132,"throttleTimeMs":0,"responseQueueTimeMs":0.146,"sendTimeMs":0.363,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:49 09:09:49.590 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:49 09:09:49.590 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:49 09:09:49.590 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=18) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:49 09:09:49.591 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:49 09:09:49.591 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=60) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 09:09:49 09:09:49.592 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:50 09:09:50.093 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:50 09:09:50.094 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=60): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:50 09:09:50.095 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:50 09:09:50.096 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:50 09:09:50.095 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":60,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":502.683,"requestQueueTimeMs":0.209,"localTimeMs":1.229,"remoteTimeMs":500.735,"throttleTimeMs":0,"responseQueueTimeMs":0.127,"sendTimeMs":0.382,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:50 09:09:50.096 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=19) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:50 09:09:50.096 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:50 09:09:50.096 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 09:09:50 09:09:50.097 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:50 09:09:50.320 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:50 09:09:50.321 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=62) and timeout 30000 to node 2147483646: 
OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 09:09:50 09:09:50.322 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:50 09:09:50.324 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1770973785381, current time: 1770973790324,unflushed: 1 09:09:50 09:09:50.363 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 segment=[0:706]) 09:09:50 09:09:50.363 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 40 ms 09:09:50 09:09:50.365 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=62): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 09:09:50 09:09:50.365 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 09:09:50 09:09:50.365 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:50 09:09:50.366 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":62,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":43.745,"requestQueueTimeMs":0.347,"localTimeMs":42.985,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.12,"sendTimeMs":0.292,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:50 09:09:50.600 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:50 09:09:50.601 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:50 09:09:50.601 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:50 09:09:50.602 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.25,"requestQueueTimeMs":0.166,"localTimeMs":1.389,"remoteTimeMs":502.228,"throttleTimeMs":0,"responseQueueTimeMs":0.152,"sendTimeMs":0.313,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:50 09:09:50.602 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:50 09:09:50.602 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=20) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:50 09:09:50.602 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:50 09:09:50.602 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=63) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 09:09:50 09:09:50.603 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:51 09:09:51.106 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:51 09:09:51.107 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=63): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:51 09:09:51.108 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":63,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.4,"requestQueueTimeMs":0.213,"localTimeMs":1.324,"remoteTimeMs":502.33,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.37,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:51 09:09:51.108 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:51 09:09:51.108 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:51 09:09:51.109 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=21) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:51 09:09:51.109 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:51 09:09:51.109 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 09:09:51 09:09:51.110 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:51 09:09:51.614 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:51 09:09:51.615 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:51 09:09:51.615 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.759,"requestQueueTimeMs":0.235,"localTimeMs":1.231,"remoteTimeMs":502.85,"throttleTimeMs":0,"responseQueueTimeMs":0.146,"sendTimeMs":0.294,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:51 09:09:51.615 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:51 09:09:51.616 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:51 09:09:51.616 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:51 09:09:51.616 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:51 09:09:51.616 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 09:09:51 09:09:51.618 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:52 09:09:52.120 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:52 09:09:52.122 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:52 09:09:52.122 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.544,"requestQueueTimeMs":0.309,"localTimeMs":1.337,"remoteTimeMs":501.82,"throttleTimeMs":0,"responseQueueTimeMs":0.147,"sendTimeMs":0.928,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:52 09:09:52.122 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:52 09:09:52.123 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position 
FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:52 09:09:52.123 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=23) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:52 09:09:52.123 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:52 09:09:52.123 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 09:09:52 09:09:52.124 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:52 09:09:52.158 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending Heartbeat request with generation 1 and 
member id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 to coordinator localhost:40117 (id: 2147483646 rack: null) 09:09:52 09:09:52.158 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=67) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null) 09:09:52 09:09:52.159 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:52 09:09:52.160 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=67): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 09:09:52 09:09:52.160 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful Heartbeat response 09:09:52 09:09:52.160 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":67,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":1.389,"requestQueueTimeMs":0.25,"localTimeMs":0.761,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.122,"sendTimeMs":0.255,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:52 09:09:52.627 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:52 09:09:52.628 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=66): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:52 09:09:52.628 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":66,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.53,"requestQueueTimeMs":0.152,"localTimeMs":1.104,"remoteTimeMs":501.905,"throttleTimeMs":0,"responseQueueTimeMs":0.143,"sendTimeMs":0.224,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:52 09:09:52.628 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:52 09:09:52.629 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:52 09:09:52.629 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=24) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:52 09:09:52.629 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:52 09:09:52.629 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=68) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 09:09:52 09:09:52.630 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:53 09:09:53.133 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:53 09:09:53.134 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=68): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:53 09:09:53.135 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":68,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.233,"requestQueueTimeMs":0.157,"localTimeMs":1.074,"remoteTimeMs":502.548,"throttleTimeMs":0,"responseQueueTimeMs":0.145,"sendTimeMs":0.308,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:53 09:09:53.135 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:53 09:09:53.136 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:53 09:09:53.136 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=25) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:53 09:09:53.136 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:53 09:09:53.136 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=69) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 09:09:53 09:09:53.137 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:53 09:09:53.640 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:53 09:09:53.642 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=69): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:53 09:09:53.642 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":69,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.561,"requestQueueTimeMs":0.325,"localTimeMs":1.288,"remoteTimeMs":502.521,"throttleTimeMs":0,"responseQueueTimeMs":0.16,"sendTimeMs":0.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:53 09:09:53.642 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:53 09:09:53.642 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:53 09:09:53.642 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=26) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:53 09:09:53.643 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:53 09:09:53.643 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=70) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 09:09:53 09:09:53.644 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:54 09:09:54.146 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:54 09:09:54.147 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=70): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:54 09:09:54.148 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:54 09:09:54.148 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":70,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.074,"requestQueueTimeMs":0.158,"localTimeMs":1.073,"remoteTimeMs":502.392,"throttleTimeMs":0,"responseQueueTimeMs":0.147,"sendTimeMs":0.302,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:54 09:09:54.148 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:54 09:09:54.148 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=27) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:54 09:09:54.149 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:54 09:09:54.149 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=71) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 09:09:54 09:09:54.150 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:54 09:09:54.652 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:54 09:09:54.653 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=71): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:54 09:09:54.653 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":71,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":503.743,"requestQueueTimeMs":0.215,"localTimeMs":1.334,"remoteTimeMs":501.732,"throttleTimeMs":0,"responseQueueTimeMs":0.18,"sendTimeMs":0.279,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:54 09:09:54.654 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:54 09:09:54.655 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:54 09:09:54.655 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=28) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:54 09:09:54.655 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:54 09:09:54.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=72) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 09:09:54 09:09:54.657 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:55 09:09:55.160 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0 to coordinator localhost:40117 (id: 2147483646 rack: null) 09:09:55 09:09:55.160 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:55 09:09:55.160 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=73) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null) 09:09:55 09:09:55.161 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=72): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:55 09:09:55.162 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:55 09:09:55.162 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:55 09:09:55.163 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 
1 rack: null) 09:09:55 09:09:55.163 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=29) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:55 09:09:55.163 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:55 09:09:55.163 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":72,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":504.628,"requestQueueTimeMs":0.141,"localTimeMs":0.739,"remoteTimeMs":503.133,"throttleTimeMs":0,"responseQueueTimeMs":0.246,"sendTimeMs":0.368,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.163 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=74) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 09:09:55 09:09:55.165 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:55 09:09:55.174 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=73): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 09:09:55 09:09:55.174 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received successful Heartbeat response 09:09:55 09:09:55.175 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":73,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":13.342,"requestQueueTimeMs":0.449,"localTimeMs":1.643,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":11.038,"sendTimeMs":0.211,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.321 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:55 09:09:55.321 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=75) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 09:09:55 09:09:55.323 [data-plane-kafka-request-handler-0] DEBUG 
kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0) unblocked 1 Heartbeat operations 09:09:55 09:09:55.325 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1770973790363, current time: 1770973795325,unflushed: 1 09:09:55 09:09:55.328 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 09:09:55 09:09:55.328 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 4 ms 09:09:55 09:09:55.330 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=75): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 09:09:55 09:09:55.330 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":75,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82-165bc2ae-a1b1-4995-8a9f-1bee3f2ba8f0","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49644-3","totalTimeMs":7.82,"requestQueueTimeMs":0.238,"localTimeMs":7.298,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.093,"sendTimeMs":0.189,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.330 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 09:09:55 09:09:55.330 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 09:09:55 09:09:55.565 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 09:09:55 acks = -1 09:09:55 batch.size = 16384 09:09:55 bootstrap.servers = [SASL_PLAINTEXT://localhost:40117] 09:09:55 buffer.memory = 33554432 09:09:55 client.dns.lookup = use_all_dns_ips 09:09:55 client.id = mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b 09:09:55 compression.type = none 
09:09:55 connections.max.idle.ms = 540000 09:09:55 delivery.timeout.ms = 120000 09:09:55 enable.idempotence = true 09:09:55 interceptor.classes = [] 09:09:55 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:09:55 linger.ms = 0 09:09:55 max.block.ms = 60000 09:09:55 max.in.flight.requests.per.connection = 5 09:09:55 max.request.size = 1048576 09:09:55 metadata.max.age.ms = 300000 09:09:55 metadata.max.idle.ms = 300000 09:09:55 metric.reporters = [] 09:09:55 metrics.num.samples = 2 09:09:55 metrics.recording.level = INFO 09:09:55 metrics.sample.window.ms = 30000 09:09:55 partitioner.adaptive.partitioning.enable = true 09:09:55 partitioner.availability.timeout.ms = 0 09:09:55 partitioner.class = null 09:09:55 partitioner.ignore.keys = false 09:09:55 receive.buffer.bytes = 32768 09:09:55 reconnect.backoff.max.ms = 1000 09:09:55 reconnect.backoff.ms = 50 09:09:55 request.timeout.ms = 30000 09:09:55 retries = 2147483647 09:09:55 retry.backoff.ms = 100 09:09:55 sasl.client.callback.handler.class = null 09:09:55 sasl.jaas.config = [hidden] 09:09:55 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:09:55 sasl.kerberos.min.time.before.relogin = 60000 09:09:55 sasl.kerberos.service.name = null 09:09:55 sasl.kerberos.ticket.renew.jitter = 0.05 09:09:55 sasl.kerberos.ticket.renew.window.factor = 0.8 09:09:55 sasl.login.callback.handler.class = null 09:09:55 sasl.login.class = null 09:09:55 sasl.login.connect.timeout.ms = null 09:09:55 sasl.login.read.timeout.ms = null 09:09:55 sasl.login.refresh.buffer.seconds = 300 09:09:55 sasl.login.refresh.min.period.seconds = 60 09:09:55 sasl.login.refresh.window.factor = 0.8 09:09:55 sasl.login.refresh.window.jitter = 0.05 09:09:55 sasl.login.retry.backoff.max.ms = 10000 09:09:55 sasl.login.retry.backoff.ms = 100 09:09:55 sasl.mechanism = PLAIN 09:09:55 sasl.oauthbearer.clock.skew.seconds = 30 09:09:55 sasl.oauthbearer.expected.audience = null 09:09:55 sasl.oauthbearer.expected.issuer = null 09:09:55 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:09:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:09:55 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:09:55 sasl.oauthbearer.jwks.endpoint.url = null 09:09:55 sasl.oauthbearer.scope.claim.name = scope 09:09:55 sasl.oauthbearer.sub.claim.name = sub 09:09:55 sasl.oauthbearer.token.endpoint.url = null 09:09:55 security.protocol = SASL_PLAINTEXT 09:09:55 security.providers = null 09:09:55 send.buffer.bytes = 131072 09:09:55 socket.connection.setup.timeout.max.ms = 30000 09:09:55 socket.connection.setup.timeout.ms = 10000 09:09:55 ssl.cipher.suites = null 09:09:55 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:09:55 ssl.endpoint.identification.algorithm = https 09:09:55 ssl.engine.factory.class = null 09:09:55 ssl.key.password = null 09:09:55 ssl.keymanager.algorithm = SunX509 09:09:55 ssl.keystore.certificate.chain = null 09:09:55 ssl.keystore.key = null 09:09:55 ssl.keystore.location = null 09:09:55 ssl.keystore.password = null 09:09:55 ssl.keystore.type = JKS 09:09:55 ssl.protocol = TLSv1.3 09:09:55 ssl.provider = null 09:09:55 ssl.secure.random.implementation = null 09:09:55 ssl.trustmanager.algorithm = PKIX 09:09:55 ssl.truststore.certificates = null 09:09:55 ssl.truststore.location = null 09:09:55 ssl.truststore.password = null 09:09:55 ssl.truststore.type = JKS 09:09:55 transaction.timeout.ms = 60000 09:09:55 transactional.id = null 09:09:55 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:09:55 09:09:55 09:09:55.576 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Instantiated an idempotent producer. 
09:09:55 09:09:55.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:09:55 09:09:55.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:09:55 09:09:55.592 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Starting Kafka producer I/O thread. 09:09:55 09:09:55.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973795592 09:09:55 09:09:55.592 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Kafka producer started 09:09:55 09:09:55.592 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Transition from state UNINITIALIZED to INITIALIZING 09:09:55 09:09:55.594 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 09:09:55 09:09:55.594 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: -1 rack: null) for sending metadata request 09:09:55 09:09:55.595 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.595 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: -1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.595 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:33314 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:55 09:09:55.595 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:33314 09:09:55 09:09:55.596 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.596 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.598 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 09:09:55 09:09:55.598 
[kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:55 09:09:55.599 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:55 09:09:55.599 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:55 09:09:55.599 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Completed connection to node -1. Fetching API versions. 
09:09:55 09:09:55.599 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:55 09:09:55.600 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:55 09:09:55.600 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:55 09:09:55.601 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:55 09:09:55.601 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:55 09:09:55.601 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to INITIAL 09:09:55 09:09:55.601 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to INTERMEDIATE 09:09:55 09:09:55.601 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:55 09:09:55.602 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:55 09:09:55.602 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:55 09:09:55.602 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to COMPLETE 09:09:55 09:09:55.602 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Finished authentication with no session expiration and no session re-authentication 09:09:55 09:09:55.602 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Successfully authenticated with localhost/127.0.0.1 09:09:55 09:09:55.602 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:55 09:09:55.602 
[kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating API versions fetch from node -1. 09:09:55 09:09:55.602 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:55 09:09:55.604 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), 
ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), 
ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:55 09:09:55.605 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersi
on":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:33314-4","totalTimeMs":1.271,"requestQueueTimeMs":0.235,"localTimeMs":0.785,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.177,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:55 09:09:55.605 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], 
AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:55 09:09:55.605 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: -1 rack: null) 09:09:55 09:09:55.605 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 09:09:55 09:09:55.606 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:40117 (id: -1 rack: null) with correlation ID 2 09:09:55 09:09:55.606 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 09:09:55 09:09:55.607 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":40117,"rack":null}],"clusterId":"lcpOyY1-QY2MMThgHGGgSA","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"rpOTusxPRiGyrsjjjH_fwA","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:40117-127.0.0.1:33314-4","totalTimeMs":1.191,"requestQueueTimeMs":0.198,"localTimeMs":0.839,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.056,"sendTimeMs":0.096,"securityProtocol":"SA
SL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.607 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=40117, rack=null)], clusterId='lcpOyY1-QY2MMThgHGGgSA', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 09:09:55 09:09:55.607 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to rpOTusxPRiGyrsjjjH_fwA 09:09:55 09:09:55.608 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Cluster ID: lcpOyY1-QY2MMThgHGGgSA 09:09:55 09:09:55.608 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Updated cluster metadata 
updateVersion 2 to MetadataCache{clusterId='lcpOyY1-QY2MMThgHGGgSA', nodes={1=localhost:40117 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:40117 (id: 1 rack: null)} 09:09:55 09:09:55.610 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 09:09:55 09:09:55.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.614 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:33316 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:55 09:09:55.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.615 [TestBroker:1:BrokerToControllerChannelManager broker=1 
name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 09:09:55 09:09:55.615 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:55 09:09:55.615 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 09:09:55 09:09:55.621 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:33316 09:09:55 09:09:55.621 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:55 09:09:55.621 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:55 09:09:55.622 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:55 09:09:55.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:55 
09:09:55.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:55 09:09:55.622 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:55 09:09:55.622 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:55 09:09:55.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 09:09:55 09:09:55.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 09:09:55 09:09:55.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:55 09:09:55.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:55 09:09:55.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:55 09:09:55.623 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:55 09:09:55.623 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 09:09:55 09:09:55.623 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 09:09:55 09:09:55.623 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 09:09:55 09:09:55.623 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 
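The broker-to-controller channel above completes a full SASL PLAIN handshake before any request is forwarded. The client-side state progression visible in these DEBUG lines (SEND_APIVERSIONS_REQUEST through COMPLETE) can be summarized with a small sketch — the state names are taken directly from the log, but the ordering check itself is a simplified illustration, not the real `SaslClientAuthenticator` logic (which also handles failure and re-authentication states):

```python
# SASL client authenticator states in the order they appear in the
# log above. This is an illustrative sketch of the happy path only.
SASL_CLIENT_STATES = [
    "SEND_APIVERSIONS_REQUEST",
    "RECEIVE_APIVERSIONS_RESPONSE",
    "SEND_HANDSHAKE_REQUEST",
    "RECEIVE_HANDSHAKE_RESPONSE",
    "INITIAL",          # first SASL_AUTHENTICATE token sent
    "INTERMEDIATE",     # exchanging challenge/response with the server
    "COMPLETE",
]

def is_valid_progression(observed):
    """Return True if `observed` visits SASL_CLIENT_STATES in order
    (a subsequence check: states may be skipped but never reordered)."""
    it = iter(SASL_CLIENT_STATES)
    return all(state in it for state in observed)
```

For example, the sequence logged for the forwarding channel (`SEND_APIVERSIONS_REQUEST`, …, `COMPLETE`) passes this check, while any reordering — say `COMPLETE` before `INITIAL` — fails it.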
09:09:55 09:09:55.623 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:55 09:09:55.625 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, 
maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:55 09:09:55.625 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion"
:0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:33316-4","totalTimeMs":0.975,"requestQueueTimeMs":0.246,"localTimeMs":0.508,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.048,"sendTimeMs":0.171,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:55 09:09:55.625 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], 
OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], 
ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:55 09:09:55.626 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 09:09:55 09:09:55.638 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:55 09:09:55.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:getData cxid:0x108 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 09:09:55 09:09:55.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:getData cxid:0x108 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 09:09:55 09:09:55.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 09:09:55 09:09:55.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:55 ] 09:09:55 09:09:55.639 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:55 , 'ip,'127.0.0.1 09:09:55 ] 09:09:55 09:09:55.639 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 264,4 replyHeader:: 264,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1770973772528,1770973772528,0,0,0,0,0,0,15} 09:09:55 09:09:55.640 [controller-event-thread] DEBUG kafka.controller.KafkaController 
- [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 09:09:55 09:09:55.641 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002945e0000 09:09:55 09:09:55.642 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 09:09:55 09:09:55.642 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} 09:09:55 ] 09:09:55 09:09:55.642 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient 09:09:55 , 'ip,'127.0.0.1 09:09:55 ] 09:09:55 09:09:55.642 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 271017000807 09:09:55 09:09:55.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:setData cxid:0x109 zxid:0x8c txntype:5 reqpath:n/a 09:09:55 09:09:55.644 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 09:09:55 09:09:55.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 267671441431 09:09:55 09:09:55.644 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:setData cxid:0x109 zxid:0x8c txntype:5 reqpath:n/a 09:09:55 09:09:55.644 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 265,5 replyHeader:: 265,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: 
s{15,140,1770973772528,1770973795641,1,0,0,0,60,0,15} 09:09:55 09:09:55.646 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 09:09:55 09:09:55.646 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 09:09:55 09:09:55.649 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 09:09:55 09:09:55.649 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:40117-127.0.0.1:33316-4","totalTimeMs":17.35,"requestQueueTimeMs":1.186,"localTimeMs":1.443,"remoteTimeMs":14.391,"throttleTimeMs":0,"responseQueueTimeMs":0.109,"sendTimeMs":0.22,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.650 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - 
[RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 09:09:55 09:09:55.653 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 09:09:55 09:09:55.653 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] ProducerId set to 0 with epoch 0 09:09:55 09:09:55.653 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Transition from state INITIALIZING to READY 09:09:55 09:09:55.654 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:40117-127.0.0.1:33314-4","totalTimeMs":45.944,"requestQueueTimeMs":1.159,"localTimeMs":44.451,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.234,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.654 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.654 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.654 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.655 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 
09:09:55.655 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:33318 on /127.0.0.1:40117 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:55 09:09:55.655 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 09:09:55 09:09:55.655 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:55 09:09:55.655 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Completed connection to node 1. Fetching API versions. 
09:09:55 09:09:55.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:33318 09:09:55 09:09:55.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:55 09:09:55.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:55 09:09:55.657 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:55 09:09:55.657 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:55 09:09:55.657 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:55 09:09:55.658 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:55 09:09:55.658 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:55 09:09:55.658 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to INITIAL 09:09:55 09:09:55.659 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to INTERMEDIATE 09:09:55 09:09:55.659 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:55 09:09:55.659 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:55 09:09:55.659 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:55 09:09:55.659 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:55 09:09:55.659 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to COMPLETE 09:09:55 09:09:55.659 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Finished authentication with no session expiration and no session re-authentication 09:09:55 09:09:55.659 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Successfully authenticated with localhost/127.0.0.1 09:09:55 09:09:55.659 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating API versions fetch from node 1. 
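The producer thread above repeats the same SASL PLAIN authentication against the test broker on `localhost:40117`, and the earlier `INIT_PRODUCER_ID` exchange shows it running with idempotence enabled. A hypothetical client configuration that would reproduce this connection pattern is sketched below — the username matches the `User:admin` principal in the request log, but the password, and the exact configuration used by the test harness, are placeholders not shown in this log:

```python
# Hypothetical producer configuration matching the behaviour in the
# log: SASL_PLAINTEXT listener, PLAIN mechanism, idempotent producer
# (which is what triggers the INIT_PRODUCER_ID request seen above).
producer_config = {
    "bootstrap.servers": "localhost:40117",   # broker address from the log
    "security.protocol": "SASL_PLAINTEXT",
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": (
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="admin" password="<placeholder>";'  # credentials are assumed
    ),
    "enable.idempotence": True,
}
```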
09:09:55 09:09:55.660 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 09:09:55 09:09:55.661 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), 
ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 09:09:55 09:09:55.662 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], 
RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 09:09:55 09:09:55.662 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVer
sion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:40117-127.0.
0.1:33318-5","totalTimeMs":1.174,"requestQueueTimeMs":0.214,"localTimeMs":0.679,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.2,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:55 09:09:55.664 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 09:09:55 09:09:55.664 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 09:09:55 09:09:55.666 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 0 partition(s) 09:09:55 09:09:55.667 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":74,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":502.481,"requestQueueTimeMs":0.233,"localTimeMs":7.219,"remoteTimeMs":494.779,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.176,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.669 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 09:09:55 09:09:55.691 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1770973775304, current time: 1770973795691,unflushed: 3 09:09:55 09:09:55.693 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 09:09:55 09:09:55.693 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log 
in 18 ms 09:09:55 09:09:55.699 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 09:09:55 09:09:55.700 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:40117-127.0.0.1:33318-5","totalTimeMs":29.019,"requestQueueTimeMs":3.63,"localTimeMs":25.005,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.125,"sendTimeMs":0.258,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.703 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=74): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[]) 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 0 response partition(s), 1 implied partition(s) 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=30) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:55 09:09:55.706 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=76) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 09:09:55 09:09:55.707 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 09:09:55 09:09:55.710 [data-plane-kafka-request-handler-1] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1021167583 returning 1 partition(s) 09:09:55 09:09:55.732 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=76): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1021167583, responses=[FetchableTopicResponse(topic='', topicId=rpOTusxPRiGyrsjjjH_fwA, 
partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 09:09:55 09:09:55.733 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1021167583 with 1 response partition(s) 09:09:55 09:09:55.733 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 09:09:55 09:09:55.733 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":76,"clientId":"mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1021167583,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1021167583,"responses":[{"topicId":"rpOTusxPRiGyrsjjjH_fwA","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:40117-127.0.0.1:49642-3","totalTimeMs":25.536,"requestQueueTimeMs":0.177,"localTimeMs":4.584,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.105,"sendTimeMs":20.668,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 09:09:55 09:09:55.734 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:40117 (id: 1 rack: null)], epoch=0}} to node localhost:40117 (id: 1 rack: null) 09:09:55 09:09:55.734 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Built incremental fetch (sessionId=1021167583, epoch=31) for node 1. 
Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 09:09:55 09:09:55.734 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:40117 (id: 1 rack: null) 09:09:55 09:09:55.735 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=77) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 09:09:55 09:09:55.736 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1021167583, epoch 32: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 09:09:55 09:09:55.749 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 09:09:55 09:09:55.750 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 09:09:55 09:09:55.751 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.751 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 
09:09:55 09:09:55.751 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.752 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.752 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:33320 on /127.0.0.1:40117 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 09:09:55 09:09:55.752 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 09:09:55 09:09:55.752 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:33320 09:09:55 09:09:55.752 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 09:09:55 09:09:55.752 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 
09:09:55 09:09:55.752 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 09:09:55 09:09:55.752 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 09:09:55 09:09:55.753 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 09:09:55 09:09:55.753 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 09:09:55 09:09:55.753 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 09:09:55 09:09:55.753 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 09:09:55 09:09:55.753 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 09:09:55 09:09:55.753 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 09:09:55 09:09:55.753 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 09:09:55 09:09:55.754 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 09:09:55 09:09:55.754 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 09:09:55 09:09:55.754 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 09:09:55 09:09:55.754 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 09:09:55 09:09:55.754 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 09:09:55 09:09:55.754 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session expiration and no session re-authentication 09:09:55 09:09:55.754 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 09:09:55 09:09:55.754 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 09:09:55 09:09:55.757 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 
09:09:55 09:09:55.757 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 09:09:55 09:09:55.758 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 09:09:55 09:09:55.761 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 09:09:55 09:09:55.766 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 09:09:55 09:09:55.766 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 12ms 09:09:55 09:09:55.766 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:40117-127.0.0.1:33320-5","totalTimeMs":11.172,"requestQueueTimeMs":1.123,"localTimeMs":1.011,"remoteTimeMs":8.695,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.243,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 09:09:55 09:09:55.766 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:40117-127.0.0.1:33320-5) disconnected 09:09:55 java.io.EOFException: null 09:09:55 at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 09:09:55 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 09:09:55 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 09:09:55 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:55 at kafka.network.Processor.poll(SocketServer.scala:1055) 09:09:55 at kafka.network.Processor.run(SocketServer.scala:959) 09:09:55 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:55 09:09:55.768 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 09:09:55 09:09:55.769 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 09:09:55 09:09:55.769 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped 09:09:55 09:09:55.769 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 09:09:55 09:09:55.771 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-40117] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 
09:09:55 09:09:55.773 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 09:09:55 09:09:55.774 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:33316-4 09:09:55 09:09:55.774 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:55 java.io.EOFException: null 09:09:55 at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) 09:09:55 at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) 09:09:55 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) 09:09:55 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:55 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:55 at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) 09:09:55 at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) 09:09:55 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 09:09:55 09:09:55.774 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 09:09:55 09:09:55.774 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:33314-4 09:09:55 09:09:55.774 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:49640-2
09:09:55 09:09:55.775 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected.
09:09:55 09:09:55.775 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:49644-3
09:09:55 09:09:55.775 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:09:55 java.io.EOFException: null
09:09:55 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:55 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:55 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:55 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:55 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:55 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:55 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:55 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:55 09:09:55.776 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node -1 disconnected.
09:09:55 09:09:55.775 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:49630-0
09:09:55 09:09:55.777 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:33318-5
09:09:55 09:09:55.778 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:40117-127.0.0.1:49642-3
09:09:55 09:09:55.778 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:40117 (id: 1 rack: null)
09:09:55 09:09:55.778 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b, correlationId=5) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:55 09:09:55.778 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:55 java.io.EOFException: null
09:09:55 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:55 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:55 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:55 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:55 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:55 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:55 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:55 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:55 09:09:55.778 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:09:55 09:09:55.779 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Cancelled in-flight METADATA request with correlation id 5 due to node 1 being disconnected (elapsed time since creation: 0ms, elapsed time since send: 0ms, request timeout: 30000ms): MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false)
09:09:55 09:09:55.780 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors
09:09:55 09:09:55.780 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down
09:09:55 09:09:55.781 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command
09:09:55 09:09:55.781 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command
09:09:55 09:09:55.782 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely
09:09:55 09:09:55.782 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
09:09:55 09:09:55.786 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down
09:09:55 09:09:55.787 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped
09:09:55 09:09:55.787 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed
09:09:55 09:09:55.787 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete.
09:09:55 09:09:55.788 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down
09:09:55 09:09:55.788 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped
09:09:55 09:09:55.789 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed
09:09:55 09:09:55.790 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down.
09:09:55 09:09:55.790 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
09:09:55 09:09:55.791 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete
09:09:55 09:09:55.791 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down
09:09:55 09:09:55.791 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed
09:09:55 09:09:55.791 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped
09:09:55 09:09:55.792 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete.
09:09:55 09:09:55.792 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down.
09:09:55 09:09:55.793 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler.
09:09:55 09:09:55.793 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down
09:09:55 09:09:55.793 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped
09:09:55 09:09:55.793 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed
09:09:55 09:09:55.794 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down
09:09:55 09:09:55.794 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped
09:09:55 09:09:55.794 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed
09:09:55 09:09:55.794 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete.
09:09:55 09:09:55.795 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down
09:09:55 09:09:55.795 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down
09:09:55 09:09:55.795 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped
09:09:55 09:09:55.795 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed
09:09:55 09:09:55.796 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down
09:09:55 09:09:55.797 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed
09:09:55 09:09:55.797 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down
09:09:55 09:09:55.797 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed
09:09:55 09:09:55.797 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down
09:09:55 09:09:55.797 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed
09:09:55 09:09:55.797 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped
09:09:55 09:09:55.797 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down
09:09:55 09:09:55.798 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped
09:09:55 09:09:55.798 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed
09:09:55 09:09:55.798 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down
09:09:55 09:09:55.799 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed
09:09:55 09:09:55.799 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped
09:09:55 09:09:55.799 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down
09:09:55 09:09:55.800 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped
09:09:55 09:09:55.800 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutdown completed
09:09:55 09:09:55.804 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely
09:09:55 09:09:55.804 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down
09:09:55 09:09:55.805 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped
09:09:55 09:09:55.805 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed
09:09:55 09:09:55.806 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown
09:09:55 09:09:55.807 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down
09:09:55 09:09:55.807 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped
09:09:55 09:09:55.807 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed
09:09:55 09:09:55.807 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown
09:09:55 09:09:55.808 [main] INFO kafka.log.LogManager - Shutting down.
09:09:55 09:09:55.809 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner.
09:09:55 09:09:55.809 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down
09:09:55 09:09:55.809 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped
09:09:55 09:09:55.809 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed
09:09:55 09:09:55.811 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit11182757027218931278
09:09:55 09:09:55.813 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776487, current time: 1770973795813,unflushed: 0
09:09:55 09:09:55.815 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.817 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.820 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.822 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776938, current time: 1770973795822,unflushed: 0
09:09:55 09:09:55.824 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.824 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.824 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.824 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776708, current time: 1770973795824,unflushed: 0
09:09:55 09:09:55.826 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.826 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.826 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.826 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776930, current time: 1770973795826,unflushed: 0
09:09:55 09:09:55.827 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.827 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.827 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.828 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776727, current time: 1770973795828,unflushed: 0
09:09:55 09:09:55.829 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.829 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.829 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.829 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776633, current time: 1770973795829,unflushed: 0
09:09:55 09:09:55.830 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.831 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.831 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.831 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776944, current time: 1770973795831,unflushed: 0
09:09:55 09:09:55.832 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.832 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.832 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.833 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776223, current time: 1770973795833,unflushed: 0
09:09:55 09:09:55.834 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.834 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.834 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.834 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776913, current time: 1770973795834,unflushed: 0
09:09:55 09:09:55.835 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.835 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.836 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.836 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776184, current time: 1770973795836,unflushed: 0
09:09:55 09:09:55.837 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.837 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.837 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.838 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776328, current time: 1770973795838,unflushed: 0
09:09:55 09:09:55.839 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.840 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.840 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.840 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776071, current time: 1770973795840,unflushed: 0
09:09:55 09:09:55.841 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.841 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.842 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.842 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776017, current time: 1770973795842,unflushed: 0
09:09:55 09:09:55.843 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.843 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.844 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.844 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1770973795328, current time: 1770973795844,unflushed: 0
09:09:55 09:09:55.844 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.848 [log-closing-/tmp/kafka-unit11182757027218931278] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 2 ms.
09:09:55 09:09:55.849 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.849 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12
09:09:55 09:09:55.849 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776598, current time: 1770973795849,unflushed: 0
09:09:55 09:09:55.851 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.851 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.851 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.851 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776259, current time: 1770973795851,unflushed: 0
09:09:55 09:09:55.852 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.853 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.853 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.853 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776233, current time: 1770973795853,unflushed: 0
09:09:55 09:09:55.854 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.854 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.854 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0
09:09:55 09:09:55.855 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 1770973795693, current time: 1770973795855,unflushed: 0
09:09:55 09:09:55.855 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit11182757027218931278] Closing log
09:09:55 09:09:55.856 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected
09:09:55 java.io.EOFException: null
09:09:55 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:55 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:55 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:55 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:55 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:55 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:55 09:09:55.856 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:55 java.io.EOFException: null
09:09:55 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:55 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:55 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:55 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:55 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:55 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:55 09:09:55.856 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:09:55 java.io.EOFException: null
09:09:55 	at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452)
09:09:55 	at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402)
09:09:55 	at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674)
09:09:55 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576)
09:09:55 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:55 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:55 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:55 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:55 09:09:55.857 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:55 09:09:55.857 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 77 due to node 1 being disconnected (elapsed time since creation: 123ms, elapsed time since send: 123ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1021167583, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=rpOTusxPRiGyrsjjjH_fwA, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='')
09:09:55 09:09:55.857 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node -1 disconnected.
09:09:55 09:09:55.857 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 2147483646 disconnected.
09:09:55 09:09:55.857 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, correlationId=77) due to node 1 being disconnected
09:09:55 09:09:55.858 [log-closing-/tmp/kafka-unit11182757027218931278] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 2 ms.
09:09:55 09:09:55.858 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Error sending fetch request (sessionId=1021167583, epoch=31) to node 1:
09:09:55 org.apache.kafka.common.errors.DisconnectException: null
09:09:55 09:09:55.858 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0
09:09:55 09:09:55.858 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12
09:09:55 09:09:55.858 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Group coordinator localhost:40117 (id: 
2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 09:09:55 09:09:55.858 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:55 09:09:55.858 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973775998, current time: 1770973795858,unflushed: 0 09:09:55 09:09:55.860 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.860 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.860 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.860 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776383, current time: 1770973795860,unflushed: 0 09:09:55 09:09:55.862 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.862 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.862 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.862 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776884, current time: 1770973795862,unflushed: 0 09:09:55 09:09:55.863 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.864 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.864 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.864 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776408, current time: 1770973795864,unflushed: 0 09:09:55 09:09:55.865 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.865 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.865 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.865 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776446, current time: 1770973795865,unflushed: 0 09:09:55 09:09:55.867 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.867 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.867 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.867 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776243, current time: 1770973795867,unflushed: 0 09:09:55 09:09:55.868 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.868 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.868 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.868 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776321, current time: 1770973795868,unflushed: 0 09:09:55 09:09:55.869 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.870 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.870 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.870 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776496, current time: 1770973795870,unflushed: 0 09:09:55 09:09:55.871 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.871 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.871 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.871 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776903, current time: 1770973795871,unflushed: 0 09:09:55 09:09:55.872 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.872 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.872 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.873 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776921, current time: 1770973795873,unflushed: 0 09:09:55 09:09:55.874 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.874 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.874 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.874 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776607, current time: 1770973795874,unflushed: 0 09:09:55 09:09:55.875 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.876 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.876 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.876 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776313, current time: 1770973795876,unflushed: 0 09:09:55 09:09:55.877 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.877 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.877 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.877 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776026, current time: 1770973795877,unflushed: 0 09:09:55 09:09:55.878 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:09:55 09:09:55.878 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.878 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.878 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.879 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.879 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.879 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.879 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.879 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776212, current time: 1770973795879,unflushed: 0 09:09:55 09:09:55.880 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.880 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.880 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:55 java.net.ConnectException: Connection refused 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:09:55 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:09:55 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:09:55 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:55 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:55 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:09:55 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:09:55 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:55 09:09:55.880 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.880 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:09:55 09:09:55.880 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:09:55 09:09:55.880 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776847, current time: 1770973795880,unflushed: 0 09:09:55 09:09:55.881 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.881 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.882 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.882 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776392, current time: 1770973795882,unflushed: 0 09:09:55 09:09:55.883 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.883 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.883 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.883 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776008, current time: 1770973795883,unflushed: 0 09:09:55 09:09:55.884 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.884 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.884 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.884 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776468, current time: 1770973795884,unflushed: 0 09:09:55 09:09:55.885 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.886 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.886 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.886 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776174, current time: 1770973795886,unflushed: 0 09:09:55 09:09:55.887 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.887 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.887 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.887 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776478, current time: 1770973795887,unflushed: 0 09:09:55 09:09:55.888 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.888 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.888 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.888 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776297, current time: 1770973795888,unflushed: 0 09:09:55 09:09:55.889 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.890 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.890 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.890 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776569, current time: 1770973795890,unflushed: 0 09:09:55 09:09:55.891 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.891 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.891 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.891 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776892, current time: 1770973795891,unflushed: 0 09:09:55 09:09:55.892 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.892 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.892 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.893 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776963, current time: 1770973795893,unflushed: 0 09:09:55 09:09:55.894 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.894 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.894 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.894 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776204, current time: 1770973795894,unflushed: 0 09:09:55 09:09:55.895 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.895 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.895 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.895 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776289, current time: 1770973795895,unflushed: 0 09:09:55 09:09:55.896 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.896 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.896 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.897 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776812, current time: 1770973795897,unflushed: 0 09:09:55 09:09:55.898 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.898 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.898 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.898 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776305, current time: 1770973795898,unflushed: 0 09:09:55 09:09:55.930 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:55 09:09:55.931 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:55 09:09:55.931 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:55 09:09:55.931 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776623, current time: 1770973795931,unflushed: 0 09:09:55 09:09:55.959 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:09:55 09:09:55.959 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.959 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.959 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.959 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.960 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:55 
java.net.ConnectException: Connection refused 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:09:55 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:09:55 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:09:55 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:55 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:55 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:09:55 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:09:55 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:09:55 09:09:55.961 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 09:09:55 09:09:55.961 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:09:55 09:09:55.961 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:55 09:09:55.981 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:09:55 09:09:55.981 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:55 09:09:55.981 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:55 09:09:55.981 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:55 09:09:55.981 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:55 09:09:55.982 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:55 java.net.ConnectException: Connection refused 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:09:55 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:09:55 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:09:55 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:09:55 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:09:55 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:55 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:55 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:09:55 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:09:55 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:55 09:09:55.982 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:09:55 09:09:55.982 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:09:56 09:09:56.029 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:56 09:09:56.029 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:56 09:09:56.029 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:56 09:09:56.030 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776589, current time: 1770973796030,unflushed: 0 09:09:56 09:09:56.051 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:56 09:09:56.051 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:56 09:09:56.051 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:56 09:09:56.051 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776194, current time: 1770973796051,unflushed: 0 09:09:56 09:09:56.053 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:56 09:09:56.053 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:56 09:09:56.053 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:56 09:09:56.053 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776252, current time: 1770973796053,unflushed: 0 09:09:56 09:09:56.055 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:56 09:09:56.055 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:56 09:09:56.055 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:56 09:09:56.056 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit11182757027218931278] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1770973776505, current time: 1770973796056,unflushed: 0 09:09:56 09:09:56.057 
[log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit11182757027218931278] Closing log 09:09:56 09:09:56.057 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 09:09:56 09:09:56.057 [log-closing-/tmp/kafka-unit11182757027218931278] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit11182757027218931278/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 09:09:56 09:09:56.058 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit11182757027218931278 09:09:56 09:09:56.061 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:09:56 09:09:56.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:56 09:09:56.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:56 09:09:56.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:56 09:09:56.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:56 09:09:56.062 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit11182757027218931278 09:09:56 09:09:56.063 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:56 java.net.ConnectException: Connection refused 09:09:56 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:09:56 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:09:56 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:09:56 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:09:56 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:09:56 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:56 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:56 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:09:56 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:09:56 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:09:56 09:09:56.063 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 
09:09:56 09:09:56.063 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 09:09:56 09:09:56.063 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:56 09:09:56.068 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit11182757027218931278 09:09:56 09:09:56.070 [main] INFO kafka.log.LogManager - Shutdown complete. 09:09:56 09:09:56.070 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 09:09:56 09:09:56.070 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 09:09:56 09:09:56.070 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 09:09:56 09:09:56.071 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 09:09:56 09:09:56.071 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 09:09:56 09:09:56.072 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
09:09:56 09:09:56.073 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 09:09:56 09:09:56.074 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 09:09:56 09:09:56.074 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 09:09:56 09:09:56.074 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 09:09:56 09:09:56.074 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 09:09:56 09:09:56.076 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 09:09:56 09:09:56.077 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 09:09:56 09:09:56.077 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 09:09:56 09:09:56.077 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 09:09:56 09:09:56.078 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 09:09:56 09:09:56.078 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
09:09:56 09:09:56.078 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000002945e0000 09:09:56 09:09:56.078 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000002945e0000 09:09:56 09:09:56.079 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 267671441431 09:09:56 09:09:56.079 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268808859999 09:09:56 09:09:56.079 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 268665528520 09:09:56 09:09:56.079 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270089519136 09:09:56 09:09:56.080 [ProcessThread(sid:0 cport:46481):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 266545927753 09:09:56 09:09:56.082 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002945e0000 type:closeSession cxid:0x10a zxid:0x8d txntype:-11 reqpath:n/a 09:09:56 09:09:56.082 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000002945e0000 09:09:56 09:09:56.082 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 09:09:56 09:09:56.082 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000002945e0000 09:09:56 09:09:56.082 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 09:09:56 09:09:56.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000002945e0000 09:09:56 09:09:56.083 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:56 09:09:56.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 266545927753 09:09:56 09:09:56.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002945e0000 type:closeSession cxid:0x10a zxid:0x8d txntype:-11 reqpath:n/a 09:09:56 09:09:56.083 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000 09:09:56 09:09:56.083 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000002945e0000 09:09:56 09:09:56.083 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000 09:09:56 09:09:56.083 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 09:09:56 09:09:56.110 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000002945e0000 09:09:56 09:09:56.111 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002945e0000 09:09:56 09:09:56.111 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000002945e0000 09:09:56 09:09:56.111 [main-SendThread(127.0.0.1:46481)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002945e0000, packet:: clientPath:null serverPath:null finished:false header:: 266,-11 replyHeader:: 
266,141,0 request:: null response:: null 09:09:56 09:09:56.111 [NIOWorkerThread-4] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:45792 which had sessionid 0x1000002945e0000 09:09:56 09:09:56.111 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000002945e0000 09:09:56 09:09:56.112 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 09:09:56 09:09:56.112 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 09:09:56 09:09:56.112 [main-SendThread(127.0.0.1:46481)] WARN org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x1000002945e0000. 09:09:56 org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x1000002945e0000, likely server has closed socket 09:09:56 at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) 09:09:56 at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) 09:09:56 at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1290) 09:09:56 09:09:56.133 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:56 09:09:56.163 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:09:56 09:09:56.164 [kafka-coordinator-heartbeat-thread | mso-group] 
DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:56 09:09:56.183 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:56 09:09:56.213 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 09:09:56 09:09:56.214 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000002945e0000 closed 09:09:56 09:09:56.214 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000002945e0000 09:09:56 09:09:56.216 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 
09:09:56 09:09:56.217 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 09:09:56 09:09:56.220 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 09:09:56 09:09:56.220 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 09:09:56 09:09:56.220 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 09:09:56 09:09:56.220 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 09:09:56 09:09:56.221 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 09:09:56 09:09:56.221 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 
09:09:56 09:09:56.222 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 09:09:56 09:09:56.234 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:09:56 09:09:56.234 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:09:56 09:09:56.234 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:09:56 09:09:56.235 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:09:56 09:09:56.235 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:09:56 09:09:56.236 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:09:56 
java.net.ConnectException: Connection refused 09:09:56 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:09:56 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:09:56 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:09:56 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:09:56 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:09:56 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:09:56 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:09:56 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:09:56 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:09:56 at java.base/java.lang.Thread.run(Thread.java:829) 09:09:56 09:09:56.236 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:09:56 09:09:56.236 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:09:56 09:09:56.245 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed
09:09:56 09:09:56.245 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed
09:09:56 09:09:56.245 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter
09:09:56 09:09:56.246 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed
09:09:56 09:09:56.247 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed
09:09:56 09:09:56.247 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered
09:09:56 09:09:56.247 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed
09:09:56 09:09:56.248 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server
09:09:56 09:09:56.248 [ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted
09:09:56 09:09:56.249 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:46481] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method
09:09:56 09:09:56.250 [NIOServerCxnFactory.SelectorThread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method
09:09:56 09:09:56.250 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method
09:09:56 09:09:56.252 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down
09:09:56 09:09:56.252 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down
09:09:56 09:09:56.252 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue
09:09:56 09:09:56.252 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests
09:09:56 09:09:56.252 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down
09:09:56 09:09:56.252 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down
09:09:56 09:09:56.253 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down
09:09:56 09:09:56.253 [ProcessThread(sid:0 cport:46481):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop!
09:09:56 09:09:56.253 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited!
09:09:56 09:09:56.253 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete
09:09:56 09:09:56.253 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit12795613514470425753/version-2/log.1
09:09:56 09:09:56.253 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit12795613514470425753/version-2/log.1
09:09:56 09:09:56.257 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception
09:09:56 java.io.EOFException: Failed to read /tmp/kafka-unit12795613514470425753/version-2/log.1
09:09:56 	at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771)
09:09:56 	at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:650)
09:09:56 	at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462)
09:09:56 	at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449)
09:09:56 	at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321)
09:09:56 	at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300)
09:09:56 	at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848)
09:09:56 	at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796)
09:09:56 	at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922)
09:09:56 	at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219)
09:09:56 	at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144)
09:09:56 	at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110)
09:09:56 	at org.apache.curator.test.TestingServer.stop(TestingServer.java:161)
09:09:56 	at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129)
09:09:56 	at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303)
09:09:56 	at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312)
09:09:56 	at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65)
09:09:56 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:09:56 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:09:56 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:09:56 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:09:56 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:09:56 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:09:56 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:09:56 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:09:56 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126)
09:09:56 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116)
09:09:56 	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:09:56 	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:09:56 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:09:56 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:09:56 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:09:56 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:09:56 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:09:56 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:09:56 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412)
09:09:56 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:56 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410)
09:09:56 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:56 	at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085)
09:09:56 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410)
09:09:56 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212)
09:09:56 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149)
09:09:56 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149)
09:09:56 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:56 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:56 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:56 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:56 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:56 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:56 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:56 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:56 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:09:56 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:09:56 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:09:56 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:09:56 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:09:56 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:09:56 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:09:56 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:09:56 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:09:56 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:09:56 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:09:56 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:09:56 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:09:56 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:09:56 	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:09:56 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:09:56 09:09:56.258 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown!
09:09:56 09:09:56.258 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down
09:09:56 09:09:56.258 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server
09:09:56 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.518 s - in org.onap.sdc.utils.SdcKafkaTest
09:09:56 [INFO] Running org.onap.sdc.utils.NotificationSenderTest
09:09:56 09:09:56.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:56 09:09:56.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:56 09:09:56.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:56 09:09:56.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:56 09:09:56.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:56 09:09:56.265 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:56 java.net.ConnectException: Connection refused
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:56 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:56 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:56 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:56 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:56 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:56 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:56 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:56 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:56 09:09:56.265 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:56 09:09:56.265 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:56 09:09:56.265 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.369 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.370 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.370 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.483 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.484 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.485 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.505 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus
09:09:56 09:09:56.506 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null
09:09:56 09:09:56.506 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status
09:09:56 to topic null
09:09:56 09:09:56.535 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.585 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.589 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.589 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.635 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:56 09:09:56.636 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:56 09:09:56.636 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:56 09:09:56.637 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:56 09:09:56.637 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:56 09:09:56.639 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:56 java.net.ConnectException: Connection refused
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:56 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:56 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:56 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:56 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:56 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:56 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:56 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:56 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:56 09:09:56.639 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:09:56 09:09:56.639 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:56 09:09:56.689 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:56 09:09:56.689 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:56 09:09:56.690 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:56 09:09:56.690 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:56 09:09:56.690 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:56 09:09:56.691 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:56 java.net.ConnectException: Connection refused
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:56 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:56 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:56 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:56 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:56 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:56 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:56 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:56 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:56 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:56 09:09:56.691 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:56 09:09:56.691 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:56 09:09:56.692 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.740 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.790 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.792 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.792 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.841 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.891 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.892 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.892 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:56 09:09:56.942 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.992 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:56 09:09:56.993 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:56 09:09:56.993 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.043 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.093 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.093 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.093 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.143 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.194 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.194 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.194 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.244 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.294 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.294 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.294 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.344 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:57 09:09:57.345 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:57 09:09:57.345 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:57 09:09:57.345 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:57 09:09:57.345 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:57 09:09:57.346 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:57 java.net.ConnectException: Connection refused
09:09:57 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:57 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:57 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:57 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:57 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:57 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:57 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:57 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:57 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:57 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:57 09:09:57.346 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:09:57 09:09:57.346 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:57 09:09:57.395 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.395 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.447 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.495 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.495 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.497 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.519 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus
09:09:57 09:09:57.520 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null
09:09:57 09:09:57.520 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status
09:09:57 to topic null
09:09:57 09:09:57.548 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.595 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.598 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.648 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:57 09:09:57.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:57 09:09:57.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:57 09:09:57.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:57 09:09:57.696 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:57 09:09:57.697 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:57 java.net.ConnectException: Connection refused
09:09:57 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:57 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:57 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:57 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:57 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:57 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:57 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:57 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:57 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:57 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:57 09:09:57.697 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:57 09:09:57.698 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:57 09:09:57.698 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.699 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.749 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.798 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.798 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.799 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.850 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.899 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:57 09:09:57.899 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:57 09:09:57.900 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:57 09:09:57.950 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:57.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:57.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.000 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.051 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.101 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.151 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.202 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.252 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.300 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.300 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.303 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.353 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.403 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.454 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.501 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.502 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.504 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:58 09:09:58.504 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:58 09:09:58.504 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:58 09:09:58.505 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:58 09:09:58.505 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:58 09:09:58.506 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:58 java.net.ConnectException: Connection refused
09:09:58 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:58 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:58 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:58 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:58 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:58 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:58 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:58 	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:58 	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:58 	at java.base/java.lang.Thread.run(Thread.java:829)
09:09:58 09:09:58.506 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:09:58 09:09:58.506 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:58 09:09:58.520 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher.
09:09:58 org.apache.kafka.common.KafkaException: null
09:09:58 09:09:58.540 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus
09:09:58 09:09:58.540 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null
09:09:58 09:09:58.541 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status
09:09:58 to topic null
09:09:58 09:09:58.541 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status
09:09:58 org.apache.kafka.common.KafkaException: null
09:09:58 	at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65)
09:09:58 	at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:48)
09:09:58 	at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:84)
09:09:58 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:09:58 	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:09:58 	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:09:58 	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:09:58 	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:09:58 	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:09:58 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:09:58 	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:09:58 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:09:58 	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:09:58 	at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:09:58 	at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:09:58 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:09:58 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:09:58 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:09:58 	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:09:58 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:09:58 	at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:09:58 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:09:58 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:09:58 	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:58 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:58 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:58 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:58 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:58 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:09:58 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:09:58 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:09:58 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:09:58 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:09:58 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:09:58 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:09:58 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:09:58 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:09:58 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:09:58 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:09:58 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:09:58 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:09:58 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:09:58 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:09:58 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:09:58 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:09:58 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:09:58 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:09:58 	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:09:58 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:09:58 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.283 s - in org.onap.sdc.utils.NotificationSenderTest
09:09:58 [INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest
09:09:58 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 s - in org.onap.sdc.utils.KafkaCommonConfigTest
09:09:58 [INFO] Running org.onap.sdc.utils.GeneralUtilsTest
09:09:58 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.utils.GeneralUtilsTest
09:09:58 [INFO] Running org.onap.sdc.impl.NotificationConsumerTest
09:09:58 09:09:58.671 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:58 09:09:58.672 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.672 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:58 09:09:58.772 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:58 09:09:58.773 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:58 09:09:58.774 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:58 09:09:58.774 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:58 09:09:58.774 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:58 09:09:58.774 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:58 09:09:58.777 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:58 java.net.ConnectException: Connection refused
09:09:58 	at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:58 	at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:58 	at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:58 	at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:58 	at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:58 	at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:58 	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:58 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:58 	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:58 	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:58 09:09:58.778 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:58 09:09:58.778 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:58 09:09:58.779 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:58 09:09:58.824 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:59 09:09:59.003 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:09:59 09:09:59.004 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:59 09:09:59.005 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:59 09:09:59.028 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 09:09:59 09:09:59.029 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:09:59 09:09:59.034 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:09:59 09:09:59.055 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:59 09:09:59.105 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:59 09:09:59.105 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:09:59 09:09:59.106 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:09:59 09:09:59.133 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:09:59 09:09:59.156 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:09:59 09:09:59.158 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop! 
09:09:59 09:09:59.206 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.206 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.206 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.233 [pool-8-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.256 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.306 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.306 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.307 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.333 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.357 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.406 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.407 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.433 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.458 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.507 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.507 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.508 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.533 [pool-8-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.558 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.607 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.607 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.609 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:59 09:09:59.609 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:59 09:09:59.609 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:59 09:09:59.610 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:59 09:09:59.610 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:59 09:09:59.612 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:59 java.net.ConnectException: Connection refused
09:09:59 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:59 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:59 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:59 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:59 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:59 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:59 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:59 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:09:59 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:09:59 at java.base/java.lang.Thread.run(Thread.java:829)
09:09:59 09:09:59.613 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:09:59 09:09:59.616 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:59 09:09:59.633 [pool-8-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.707 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.708 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.714 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.733 [pool-8-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.764 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.808 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:09:59 09:09:59.808 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.814 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.833 [pool-8-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.865 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.908 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:09:59 java.net.ConnectException: Connection refused
09:09:59 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:09:59 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:09:59 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:09:59 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:09:59 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:09:59 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:09:59 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:09:59 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:09:59 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:09:59 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:09:59 09:09:59.909 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:09:59 09:09:59.910 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:09:59 09:09:59.915 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:09:59 09:09:59.933 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:09:59 09:09:59.965 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.010 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.015 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.033 [pool-8-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.040 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus
09:10:00 09:10:00.040 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized
09:10:00 09:10:00.042 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.066 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.110 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.110 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.116 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.142 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.166 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.210 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.210 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.216 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.242 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.243 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic
09:10:00 09:10:00.243 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416",
09:10:00 "serviceName" : "Testnotificationser1",
09:10:00 "serviceVersion" : "1.0",
09:10:00 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d",
09:10:00 "serviceDescription" : "TestNotificationVF1",
09:10:00 "bugabuga" : "xyz",
09:10:00 "resources" : [{
09:10:00 "resourceInstanceName" : "testnotificationvf11",
09:10:00 "resourceName" : "TestNotificationVF1",
09:10:00 "resourceVersion" : "1.0",
09:10:00 "resoucreType" : "VF",
09:10:00 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d",
09:10:00 "artifacts" : [{
09:10:00 "artifactName" : "heat.yaml",
09:10:00 "artifactType" : "HEAT",
09:10:00 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml",
09:10:00 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d",
09:10:00 "artifactDescription" : "heat",
09:10:00 "artifactTimeout" : 60,
09:10:00 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35",
09:10:00 "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35",
09:10:00 "artifactVersion" : "1"
09:10:00 }, {
09:10:00 "artifactName" : "buga.bug",
09:10:00 "artifactType" : "BUGA_BUGA",
09:10:00 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env",
09:10:00 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d",
09:10:00 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact",
09:10:00 "artifactTimeout" : 0,
09:10:00 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86",
09:10:00 "artifactVersion" : "1",
09:10:00 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35"
09:10:00 }
09:10:00 ]
09:10:00 }
09:10:00 ]}
09:10:00 09:10:00.262 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: {
09:10:00 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416",
09:10:00 "serviceName": "Testnotificationser1",
09:10:00 "serviceVersion": "1.0",
09:10:00 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d",
09:10:00 "serviceDescription": "TestNotificationVF1",
09:10:00 "resources": [
09:10:00 {
09:10:00 "resourceInstanceName": "testnotificationvf11",
09:10:00 "resourceName": "TestNotificationVF1",
09:10:00 "resourceVersion": "1.0",
09:10:00 "resoucreType": "VF",
09:10:00 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d",
09:10:00 "artifacts": [
09:10:00 {
09:10:00 "artifactName": "heat.yaml",
09:10:00 "artifactType": "HEAT",
09:10:00 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml",
09:10:00 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d",
09:10:00 "artifactDescription": "heat",
09:10:00 "artifactTimeout": 60,
09:10:00 "artifactVersion": "1",
09:10:00 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35",
09:10:00 "relatedArtifactsInfo": []
09:10:00 }
09:10:00 ]
09:10:00 }
09:10:00 ],
09:10:00 "serviceArtifacts": []
09:10:00 }
09:10:00 09:10:00.267 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.311 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.311 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.317 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.342 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.367 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.411 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.411 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.418 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.442 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:00 09:10:00.468 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:00 09:10:00.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:00 09:10:00.514 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:00 09:10:00.518 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:00 09:10:00.519 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:00 09:10:00.519 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:00 09:10:00.519 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:00 09:10:00.519 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:00 09:10:00.520 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:00 java.net.ConnectException: Connection refused
09:10:00 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:00 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:00 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:00 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:00 at
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:00 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:00 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:00 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:10:00 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:00 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:00 09:10:00.520 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:10:00 09:10:00.520 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:10:00 09:10:00.542 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:00 09:10:00.614 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:00 09:10:00.615 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:00 09:10:00.620 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.642 [pool-9-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:00 09:10:00.671 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:00 09:10:00.715 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:00 09:10:00.721 
[kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.742 [pool-9-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:00 09:10:00.771 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:00 09:10:00.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:00 09:10:00.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:00 09:10:00.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:00 09:10:00.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, 
groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:00 09:10:00.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:00 java.net.ConnectException: Connection refused 09:10:00 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:00 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:00 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:00 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:00 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:00 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:00 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:00 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:10:00 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:10:00 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:10:00 09:10:00.817 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 09:10:00 09:10:00.817 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:10:00 09:10:00.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:00 09:10:00.821 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.842 [pool-9-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:00 09:10:00.872 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.917 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:00 09:10:00.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:00 09:10:00.922 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:00 09:10:00.942 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 
09:10:00 09:10:00.972 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:01 09:10:01.018 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:01 09:10:01.023 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.042 [pool-9-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:01 09:10:01.047 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 09:10:01 09:10:01.048 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:01 09:10:01.051 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:01 09:10:01.073 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.118 [kafka-coordinator-heartbeat-thread | 
mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:01 09:10:01.118 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:01 09:10:01.123 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.150 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:01 09:10:01.173 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:01 09:10:01.219 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:01 09:10:01.224 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.250 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:01 09:10:01.251 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 09:10:01 09:10:01.251 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:01 "serviceName" : "Testnotificationser1", 09:10:01 "serviceVersion" : "1.0", 09:10:01 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:01 "serviceDescription" : "TestNotificationVF1", 09:10:01 "resources" : [{ 09:10:01 "resourceInstanceName" : "testnotificationvf11", 09:10:01 "resourceName" : "TestNotificationVF1", 09:10:01 "resourceVersion" : "1.0", 09:10:01 "resoucreType" : "VF", 09:10:01 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:01 "artifacts" : [{ 09:10:01 "artifactName" : "sample-xml-alldata-1-1.xml", 09:10:01 "artifactType" : "YANG_XML", 09:10:01 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:01 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:01 "artifactDescription" : "MyYang", 09:10:01 "artifactTimeout" : 0, 09:10:01 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:01 "artifactVersion" : "1", 09:10:01 "relatedArtifacts" : [ 09:10:01 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 09:10:01 ] }, { 09:10:01 "artifactName" : "heat.yaml", 09:10:01 "artifactType" : "HEAT", 09:10:01 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:01 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:01 "artifactDescription" : "heat", 09:10:01 
"artifactTimeout" : 60, 09:10:01 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:01 "artifactVersion" : "1", 09:10:01 "relatedArtifacts" : [ 09:10:01 "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:01 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 09:10:01 ] }, { 09:10:01 "artifactName" : "heat.env", 09:10:01 "artifactType" : "HEAT_ENV", 09:10:01 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout" : 0, 09:10:01 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "artifactVersion" : "1", 09:10:01 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:01 } 09:10:01 ] 09:10:01 } 09:10:01 ]} 09:10:01 09:10:01.259 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 09:10:01 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:01 "serviceName": "Testnotificationser1", 09:10:01 "serviceVersion": "1.0", 09:10:01 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:01 "serviceDescription": "TestNotificationVF1", 09:10:01 "resources": [ 09:10:01 { 09:10:01 "resourceInstanceName": "testnotificationvf11", 09:10:01 "resourceName": "TestNotificationVF1", 09:10:01 "resourceVersion": "1.0", 09:10:01 "resoucreType": "VF", 09:10:01 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:01 "artifacts": [ 09:10:01 { 09:10:01 "artifactName": "sample-xml-alldata-1-1.xml", 09:10:01 "artifactType": "YANG_XML", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:01 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:01 "artifactDescription": "MyYang", 09:10:01 "artifactTimeout": 0, 
09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:01 "relatedArtifacts": [ 09:10:01 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 09:10:01 ], 09:10:01 "relatedArtifactsInfo": [ 09:10:01 { 09:10:01 "artifactName": "heat.env", 09:10:01 "artifactType": "HEAT_ENV", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:01 } 09:10:01 ] 09:10:01 }, 09:10:01 { 09:10:01 "artifactName": "heat.yaml", 09:10:01 "artifactType": "HEAT", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:01 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:01 "artifactDescription": "heat", 09:10:01 "artifactTimeout": 60, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:01 "generatedArtifact": { 09:10:01 "artifactName": "heat.env", 09:10:01 "artifactType": "HEAT_ENV", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:01 }, 09:10:01 "relatedArtifacts": [ 09:10:01 
"0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:01 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 09:10:01 ], 09:10:01 "relatedArtifactsInfo": [ 09:10:01 { 09:10:01 "artifactName": "sample-xml-alldata-1-1.xml", 09:10:01 "artifactType": "YANG_XML", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:01 "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:01 "artifactDescription": "MyYang", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:01 "relatedArtifacts": [ 09:10:01 "ce65d31c-35c0-43a9-90c7-596fc51d0c86" 09:10:01 ], 09:10:01 "relatedArtifactsInfo": [ 09:10:01 { 09:10:01 "artifactName": "heat.env", 09:10:01 "artifactType": "HEAT_ENV", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:01 } 09:10:01 ] 09:10:01 }, 09:10:01 { 09:10:01 "artifactName": "heat.env", 09:10:01 "artifactType": "HEAT_ENV", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:01 } 09:10:01 ] 
09:10:01 }, 09:10:01 { 09:10:01 "artifactName": "heat.env", 09:10:01 "artifactType": "HEAT_ENV", 09:10:01 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:01 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:01 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:01 "artifactTimeout": 0, 09:10:01 "artifactVersion": "1", 09:10:01 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:01 "relatedArtifactsInfo": [] 09:10:01 } 09:10:01 ] 09:10:01 } 09:10:01 ], 09:10:01 "serviceArtifacts": [] 09:10:01 } 09:10:01 09:10:01.274 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:01 09:10:01.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:01 09:10:01.324 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:01 09:10:01.350 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:01 09:10:01.375 [kafka-producer-network-thread | 
mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:01 09:10:01.375 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:01 09:10:01.375 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:01 09:10:01.375 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:01 09:10:01.375 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:01 09:10:01.376 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:01 java.net.ConnectException: Connection refused 09:10:01 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:01 at 
java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:01 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:01 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:01 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:01 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:01 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:01 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:10:01 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:01 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:01 09:10:01.376 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:10:01 09:10:01.376 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:01 09:10:01.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.450 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.476 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.527 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.550 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.577 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.620 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.620 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.627 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.650 [pool-10-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.678 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.720 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.720 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.728 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.750 [pool-10-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.779 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.821 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.829 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.850 [pool-10-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.879 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.921 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:01 09:10:01.921 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:01 09:10:01.930 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:01 09:10:01.950 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:01 09:10:01.980 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:02 09:10:02.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:02 09:10:02.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:02 09:10:02.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:02 09:10:02.022 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:02 09:10:02.023 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:02 java.net.ConnectException: Connection refused
09:10:02 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:02 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:02 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:02 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:02 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:02 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:02 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:02 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:10:02 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:10:02 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:10:02 09:10:02.023 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:10:02 09:10:02.023 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:02 09:10:02.023 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.030 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.050 [pool-10-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.057 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus
09:10:02 09:10:02.057 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized
09:10:02 09:10:02.060 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.081 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.131 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.159 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.182 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.224 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.224 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.232 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.259 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.260 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic
09:10:02 09:10:02.260 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416",
09:10:02 "serviceName" : "Testnotificationser1",
09:10:02 "serviceVersion" : "1.0",
09:10:02 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d",
09:10:02 "serviceDescription" : "TestNotificationVF1",
09:10:02 "resources" : [{
09:10:02 "resourceInstanceName" : "testnotificationvf11",
09:10:02 "resourceName" : "TestNotificationVF1",
09:10:02 "resourceVersion" : "1.0",
09:10:02 "resoucreType" : "VF",
09:10:02 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d",
09:10:02 "artifacts" : [{
09:10:02 "artifactName" : "sample-xml-alldata-1-1.xml",
09:10:02 "artifactType" : "YANG_XML",
09:10:02 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml",
09:10:02 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d",
09:10:02 "artifactDescription" : "MyYang",
09:10:02 "artifactTimeout" : 0,
09:10:02 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0",
09:10:02 "artifactVersion" : "1"
09:10:02 }, {
09:10:02 "artifactName" : "heat.yaml",
09:10:02 "artifactType" : "HEAT",
09:10:02 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml",
09:10:02 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d",
09:10:02 "artifactDescription" : "heat",
09:10:02 "artifactTimeout" : 60,
09:10:02 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35",
09:10:02 "artifactVersion" : "1"
09:10:02 }, {
09:10:02 "artifactName" : "heat.env",
09:10:02 "artifactType" : "HEAT_ENV",
09:10:02 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env",
09:10:02 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d",
09:10:02 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact",
09:10:02 "artifactTimeout" : 0,
09:10:02 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86",
09:10:02 "artifactVersion" : "1",
09:10:02 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35"
09:10:02 }
09:10:02 ]
09:10:02 }
09:10:02 ]}
09:10:02 09:10:02.264 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: {
09:10:02 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416",
09:10:02 "serviceName": "Testnotificationser1",
09:10:02 "serviceVersion": "1.0",
09:10:02 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d",
09:10:02 "serviceDescription": "TestNotificationVF1",
09:10:02 "resources": [
09:10:02 {
09:10:02 "resourceInstanceName": "testnotificationvf11",
09:10:02 "resourceName": "TestNotificationVF1",
09:10:02 "resourceVersion": "1.0",
09:10:02 "resoucreType": "VF",
09:10:02 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d",
09:10:02 "artifacts": [
09:10:02 {
09:10:02 "artifactName": "heat.yaml",
09:10:02 "artifactType": "HEAT",
09:10:02 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml",
09:10:02 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d",
09:10:02 "artifactDescription": "heat",
09:10:02 "artifactTimeout": 60,
09:10:02 "artifactVersion": "1",
09:10:02 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35",
09:10:02 "generatedArtifact": {
09:10:02 "artifactName": "heat.env",
09:10:02 "artifactType": "HEAT_ENV",
09:10:02 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env",
09:10:02 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d",
09:10:02 "artifactDescription": "Auto-generated HEAT Environment deployment artifact",
09:10:02 "artifactTimeout": 0,
09:10:02 "artifactVersion": "1",
09:10:02 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86",
09:10:02 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35"
09:10:02 },
09:10:02 "relatedArtifactsInfo": []
09:10:02 }
09:10:02 ]
09:10:02 }
09:10:02 ],
09:10:02 "serviceArtifacts": []
09:10:02 }
09:10:02 09:10:02.283 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.325 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.325 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.333 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.359 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.383 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:02 09:10:02.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:02 09:10:02.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:02 09:10:02.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:02 09:10:02.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:02 09:10:02.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:02 java.net.ConnectException: Connection refused
09:10:02 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:02 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:02 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:02 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:02 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:02 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:02 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:02 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:10:02 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:02 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:02 09:10:02.385 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:10:02 09:10:02.385 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:02 09:10:02.425 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.425 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.459 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.485 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.526 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.526 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.536 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.559 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.586 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.626 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.627 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.636 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.659 [pool-11-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.687 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.727 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.727 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.737 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.759 [pool-11-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.788 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.827 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.828 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.839 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.859 [pool-11-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.889 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.928 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:02 09:10:02.928 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:02 09:10:02.940 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:02 09:10:02.959 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:02 09:10:02.990 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:03 09:10:03.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:03 09:10:03.028 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:03 09:10:03.044 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:03 09:10:03.059 [pool-11-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:03 09:10:03.064 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus
09:10:03 09:10:03.064 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized
09:10:03 09:10:03.077 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:03 09:10:03.094 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:03 09:10:03.129 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:03 09:10:03.129 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:03 09:10:03.129 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:03 09:10:03.129 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:03 09:10:03.129 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:03 09:10:03.130 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:03 java.net.ConnectException: Connection refused
09:10:03 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:03 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:03 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:03 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:03 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:03 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:03 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:03 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280)
09:10:03 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321)
09:10:03 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454)
09:10:03 09:10:03.130 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected.
09:10:03 09:10:03.130 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:03 09:10:03.130 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:03 09:10:03.144 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:03 09:10:03.176 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:03 09:10:03.194 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:03 09:10:03.231 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:03 09:10:03.231 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:03 09:10:03.245 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:03 09:10:03.245 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:03 09:10:03.245 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:03 09:10:03.245 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:03 09:10:03.245 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:03 09:10:03.246 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:03 java.net.ConnectException: Connection refused
09:10:03 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:03 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:03 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:03 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:03 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:03 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:03 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:03 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:10:03 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:03 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:03 09:10:03.246 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:10:03 09:10:03.246 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
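The repeated `java.net.ConnectException: Connection refused` entries above come from the Kafka `NetworkClient` attempting a plain TCP connect to `localhost:40117` (an ephemeral test port with no broker listening) before SASL negotiation even starts. As a minimal sketch — not part of the build itself — the same condition can be reproduced with a bare socket connect; `BrokerCheck` and its parameters are hypothetical names:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerCheck {
    // Attempt a plain TCP connect, mirroring the first step the Kafka
    // NetworkClient performs; an IOException (ECONNREFUSED) here is the
    // same failure surfaced as java.net.ConnectException in the log.
    static boolean brokerReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 40117 is the port from the log; assuming nothing listens on it,
        // the check reports false, matching the "Broker may not be
        // available" warnings.
        System.out.println(brokerReachable("localhost", 40117, 1000));
    }
}
```

This is why both the producer and the consumer heartbeat thread cycle through "Initiating connection", "Node 1 disconnected", and "Give up sending metadata request": every connect attempt is refused at the TCP layer.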
09:10:03 09:10:03.277 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.277 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 09:10:03 09:10:03.277 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : "4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 09:10:03 09:10:03.278 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 09:10:03 09:10:03.278 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 09:10:03 java.lang.NullPointerException: null 09:10:03 at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) 09:10:03 at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) 09:10:03 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:58) 09:10:03 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 09:10:03 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 09:10:03 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 09:10:03 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 09:10:03 at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 09:10:03 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:03 09:10:03.331 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.331 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.347 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.377 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.397 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.431 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.431 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 
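The `NullPointerException` at `NotificationCallbackBuilder.buildResourceInstancesLogic` above follows a payload that carries `serviceArtifacts` but no `resources` array. Without access to the builder's source, a plausible reading is that iterating an absent list throws; the sketch below illustrates that failure mode and a defensive default with hypothetical stand-in types (`Notification`, `countResources` are not the real SDC classes):

```java
import java.util.Collections;
import java.util.List;

public class NullSafeNotification {
    // Stand-in for the parsed notification: the first payload in the log
    // has "serviceArtifacts" but no "resources", so the field deserializes
    // to null and naive iteration would throw the NPE seen in the trace.
    static class Notification {
        List<String> resources;                 // null when absent from JSON
        List<String> getResources() { return resources; }
    }

    static int countResources(Notification n) {
        // Defensive default avoids the NPE for payloads without "resources".
        List<String> r = n.getResources();
        return (r == null ? Collections.<String>emptyList() : r).size();
    }

    public static void main(String[] args) {
        System.out.println(countResources(new Notification()));
    }
}
```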
09:10:03 09:10:03.447 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.477 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.498 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.532 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.532 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.548 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.577 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.598 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is 
available 09:10:03 09:10:03.632 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.632 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.649 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.677 [pool-12-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.699 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.732 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.732 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.749 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.777 [pool-12-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.800 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.833 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.833 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.850 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.877 [pool-12-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:03 09:10:03.901 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.933 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:03 09:10:03.933 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:03 09:10:03.951 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:03 09:10:03.977 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.002 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.033 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:04 09:10:04.033 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:04 09:10:04.034 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:04 
09:10:04.034 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:04 09:10:04.034 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:04 09:10:04.035 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:04 java.net.ConnectException: Connection refused 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:04 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:04 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:04 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:04 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:04 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:04 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:10:04 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:10:04 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:10:04 09:10:04.035 
[kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 09:10:04 09:10:04.035 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 09:10:04 09:10:04.035 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.052 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.076 [pool-12-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.100 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 09:10:04 09:10:04.100 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:04 09:10:04.102 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.103 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.136 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.136 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.153 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:04 09:10:04.153 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:04 09:10:04.153 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:04 09:10:04.153 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:04 09:10:04.153 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:04 09:10:04.154 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:04 java.net.ConnectException: Connection refused 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:04 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:04 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:04 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:04 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:04 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:04 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:10:04 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:04 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:04 09:10:04.154 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:10:04 09:10:04.154 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
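The `artifactChecksum` values in this log (e.g. `ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d`, where `\u003d` is the JSON escape for the base64 `=` padding) decode to 32-character lowercase hex strings — the shape of an MD5 hex digest, though that interpretation is an inference from the payload alone, not something the log states:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ChecksumDecode {
    public static void main(String[] args) {
        // artifactChecksum from the log, with the JSON \u003d escape
        // written as a literal '='. Decoding yields a 32-character hex
        // string, consistent with base64 over an MD5 hex digest
        // (an assumption from the shape alone).
        String checksum = "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM=";
        String decoded = new String(Base64.getDecoder().decode(checksum),
                StandardCharsets.UTF_8);
        System.out.println(decoded.length() + " " + decoded.matches("[0-9a-f]+"));
    }
}
```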
09:10:04 09:10:04.202 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.236 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.236 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.254 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.302 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.303 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 09:10:04 09:10:04.303 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:04 "serviceName" : "Testnotificationser1", 09:10:04 "serviceVersion" : "1.0", 09:10:04 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:04 "serviceDescription" : "TestNotificationVF1", 09:10:04 "resources" : [{ 09:10:04 "resourceInstanceName" : "testnotificationvf11", 09:10:04 "resourceName" : "TestNotificationVF1", 09:10:04 "resourceVersion" : "1.0", 09:10:04 "resoucreType" : "VF", 09:10:04 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:04 "artifacts" : [{ 09:10:04 "artifactName" : "sample-xml-alldata-1-1.xml", 09:10:04 "artifactType" : 
"YANG_XML", 09:10:04 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:04 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:04 "artifactDescription" : "MyYang", 09:10:04 "artifactTimeout" : 0, 09:10:04 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:04 "artifactVersion" : "1" 09:10:04 }, { 09:10:04 "artifactName" : "heat.yaml", 09:10:04 "artifactType" : "HEAT", 09:10:04 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:04 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:04 "artifactDescription" : "heat", 09:10:04 "artifactTimeout" : 60, 09:10:04 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:04 "artifactVersion" : "1" 09:10:04 }, { 09:10:04 "artifactName" : "heat.env", 09:10:04 "artifactType" : "HEAT_ENV", 09:10:04 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:04 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:04 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 09:10:04 "artifactTimeout" : 0, 09:10:04 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:04 "artifactVersion" : "1", 09:10:04 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:04 } 09:10:04 ] 09:10:04 } 09:10:04 ]} 09:10:04 09:10:04.305 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.309 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 09:10:04 "distributionID": 
"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:04 "serviceName": "Testnotificationser1", 09:10:04 "serviceVersion": "1.0", 09:10:04 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:04 "serviceDescription": "TestNotificationVF1", 09:10:04 "resources": [ 09:10:04 { 09:10:04 "resourceInstanceName": "testnotificationvf11", 09:10:04 "resourceName": "TestNotificationVF1", 09:10:04 "resourceVersion": "1.0", 09:10:04 "resoucreType": "VF", 09:10:04 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:04 "artifacts": [ 09:10:04 { 09:10:04 "artifactName": "heat.yaml", 09:10:04 "artifactType": "HEAT", 09:10:04 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:04 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:04 "artifactDescription": "heat", 09:10:04 "artifactTimeout": 60, 09:10:04 "artifactVersion": "1", 09:10:04 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:04 "generatedArtifact": { 09:10:04 "artifactName": "heat.env", 09:10:04 "artifactType": "HEAT_ENV", 09:10:04 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:04 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:04 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:04 "artifactTimeout": 0, 09:10:04 "artifactVersion": "1", 09:10:04 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:04 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:04 }, 09:10:04 "relatedArtifactsInfo": [] 09:10:04 } 09:10:04 ] 09:10:04 } 09:10:04 ], 09:10:04 "serviceArtifacts": [] 09:10:04 } 09:10:04 09:10:04.336 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request 
since no node is available 09:10:04 09:10:04.336 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.356 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.406 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.412 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.436 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.437 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.456 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.502 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for 
messages from topic: null 09:10:04 09:10:04.506 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.537 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.537 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.557 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.602 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.620 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.671 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.702 [pool-13-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.721 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.738 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.738 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.771 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.802 [pool-13-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.822 [kafka-producer-network-thread | 
mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:04 09:10:04.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.872 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.902 [pool-13-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:04 09:10:04.922 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:04 09:10:04.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:04 09:10:04.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:04 
09:10:04.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:04 09:10:04.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:04 09:10:04.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:04 09:10:04.940 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:04 java.net.ConnectException: Connection refused 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:04 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:04 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:04 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:04 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:04 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:04 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:04 at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:10:04 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:10:04 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:10:04 09:10:04.940 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 09:10:04 09:10:04.940 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 09:10:04 09:10:04.940 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:04 09:10:04.972 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.002 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.023 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.040 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.040 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.073 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:05 09:10:05.073 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:05 09:10:05.073 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:05 09:10:05.073 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:05 09:10:05.074 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: 
client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:05 09:10:05.074 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:05 java.net.ConnectException: Connection refused 09:10:05 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:05 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:05 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:05 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:05 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:05 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:05 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:05 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) 09:10:05 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:05 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:05 09:10:05.074 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected. 09:10:05 09:10:05.075 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available. 
09:10:05 09:10:05.102 [pool-13-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.106 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 09:10:05 09:10:05.106 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:05 09:10:05.108 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.140 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.141 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.174 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.208 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.225 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.241 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending 
metadata request since no node is available 09:10:05 09:10:05.241 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.275 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.308 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.308 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 09:10:05 09:10:05.308 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { 09:10:05 "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:05 "serviceName" : "Testnotificationser1", 09:10:05 "serviceVersion" : "1.0", 09:10:05 "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:05 "serviceDescription" : "TestNotificationVF1", 09:10:05 "serviceArtifacts" : [{ 09:10:05 "artifactName" : "sample-xml-alldata-1-1.xml", 09:10:05 "artifactType" : "YANG_XML", 09:10:05 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:05 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:05 "artifactDescription" : "MyYang", 09:10:05 "artifactTimeout" : 0, 09:10:05 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:05 "artifactVersion" : "1" 09:10:05 }, { 09:10:05 "artifactName" : "heat.yaml", 09:10:05 "artifactType" : "HEAT", 09:10:05 "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:05 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:05 "artifactDescription" : "heat", 09:10:05 "artifactTimeout" : 60, 09:10:05 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:05 "artifactVersion" : "1" 09:10:05 }, { 09:10:05 "artifactName" : "heat.env", 09:10:05 "artifactType" : "HEAT_ENV", 09:10:05 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:05 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:05 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 09:10:05 "artifactTimeout" : 0, 09:10:05 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:05 "artifactVersion" : "1", 09:10:05 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:05 } 09:10:05 ], 09:10:05 "resources" : [{ 09:10:05 "resourceInstanceName" : "testnotificationvf11", 09:10:05 "resourceName" : "TestNotificationVF1", 09:10:05 "resourceVersion" : "1.0", 09:10:05 "resoucreType" : "VF", 09:10:05 "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:05 "artifacts" : [{ 09:10:05 "artifactName" : "sample-xml-alldata-1-1.xml", 09:10:05 "artifactType" : "YANG_XML", 09:10:05 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", 09:10:05 "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", 09:10:05 "artifactDescription" : "MyYang", 09:10:05 "artifactTimeout" : 0, 09:10:05 "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", 09:10:05 "artifactVersion" : "1" 09:10:05 }, { 09:10:05 "artifactName" : "heat.yaml", 09:10:05 "artifactType" : "HEAT", 09:10:05 "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:05 "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:05 "artifactDescription" : "heat", 09:10:05 "artifactTimeout" : 60, 09:10:05 "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:05 "artifactVersion" : "1" 09:10:05 }, { 09:10:05 "artifactName" : "heat.env", 09:10:05 "artifactType" : "HEAT_ENV", 09:10:05 "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:05 "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:05 "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", 09:10:05 "artifactTimeout" : 0, 09:10:05 "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:05 "artifactVersion" : "1", 09:10:05 "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:05 } 09:10:05 ] 09:10:05 } 09:10:05 ] 09:10:05 } 09:10:05 09:10:05.317 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { 09:10:05 "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", 09:10:05 "serviceName": "Testnotificationser1", 09:10:05 "serviceVersion": "1.0", 09:10:05 "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", 09:10:05 "serviceDescription": "TestNotificationVF1", 09:10:05 "resources": [ 09:10:05 { 09:10:05 "resourceInstanceName": "testnotificationvf11", 09:10:05 "resourceName": "TestNotificationVF1", 09:10:05 "resourceVersion": "1.0", 09:10:05 "resoucreType": "VF", 09:10:05 "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", 09:10:05 "artifacts": [ 09:10:05 { 09:10:05 "artifactName": "heat.yaml", 09:10:05 "artifactType": "HEAT", 09:10:05 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:05 "artifactChecksum": 
"ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:05 "artifactDescription": "heat", 09:10:05 "artifactTimeout": 60, 09:10:05 "artifactVersion": "1", 09:10:05 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:05 "generatedArtifact": { 09:10:05 "artifactName": "heat.env", 09:10:05 "artifactType": "HEAT_ENV", 09:10:05 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:05 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:05 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:05 "artifactTimeout": 0, 09:10:05 "artifactVersion": "1", 09:10:05 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:05 "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:05 }, 09:10:05 "relatedArtifactsInfo": [] 09:10:05 } 09:10:05 ] 09:10:05 } 09:10:05 ], 09:10:05 "serviceArtifacts": [ 09:10:05 { 09:10:05 "artifactName": "heat.yaml", 09:10:05 "artifactType": "HEAT", 09:10:05 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", 09:10:05 "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", 09:10:05 "artifactDescription": "heat", 09:10:05 "artifactTimeout": 60, 09:10:05 "artifactVersion": "1", 09:10:05 "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", 09:10:05 "generatedArtifact": { 09:10:05 "artifactName": "heat.env", 09:10:05 "artifactType": "HEAT_ENV", 09:10:05 "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", 09:10:05 "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", 09:10:05 "artifactDescription": "Auto-generated HEAT Environment deployment artifact", 09:10:05 "artifactTimeout": 0, 09:10:05 "artifactVersion": "1", 09:10:05 "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", 09:10:05 "generatedFromUUID": 
"8df6123c-f368-47d3-93be-1972cefbcc35" 09:10:05 } 09:10:05 } 09:10:05 ] 09:10:05 } 09:10:05 09:10:05.325 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.341 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.341 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.376 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.407 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.426 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.441 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.441 [kafka-coordinator-heartbeat-thread | 
mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.476 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.508 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.526 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.542 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.542 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.576 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.607 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.627 [kafka-producer-network-thread | 
mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.642 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.642 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.677 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.708 [pool-14-thread-4] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.728 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.742 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:05 09:10:05.742 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request 09:10:05 09:10:05.778 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.808 [pool-14-thread-1] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 09:10:05 09:10:05.828 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:05 09:10:05.843 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request 09:10:05 09:10:05.843 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:05 09:10:05.843 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1 09:10:05 09:10:05.843 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:05 09:10:05.843 [kafka-coordinator-heartbeat-thread | mso-group] 
DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:05 09:10:05.844 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected 09:10:05 java.net.ConnectException: Connection refused 09:10:05 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:05 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:05 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:05 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:05 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:05 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:05 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:05 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) 09:10:05 at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) 09:10:05 at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 09:10:05 09:10:05.844 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Node 1 disconnected. 
09:10:05 09:10:05.844 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:05 09:10:05.844 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:05 09:10:05.878 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:05 09:10:05.908 [pool-14-thread-5] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:05 09:10:05.929 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:05 09:10:05.944 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:05 09:10:05.944 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:05 09:10:05.979 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.007 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:06 09:10:06.029 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.045 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.045 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 09:10:06.079 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.108 [pool-14-thread-3] DEBUG org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null
09:10:06 [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.549 s - in org.onap.sdc.impl.NotificationConsumerTest
09:10:06 [INFO] Running org.onap.sdc.impl.HeatParserTest
09:10:06 09:10:06.119 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text
09:10:06 09:10:06.130 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.145 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.145 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 09:10:06.180 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.231 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initialize connection to node localhost:40117 (id: 1 rack: null) for sending metadata request
09:10:06 09:10:06.231 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:06 09:10:06.231 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Initiating connection to node localhost:40117 (id: 1 rack: null) using address localhost/127.0.0.1
09:10:06 09:10:06.232 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:06 09:10:06.232 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:06 09:10:06.232 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object.
09:10:06 org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null
09:10:06  in 'string', line 1, column 1:
09:10:06     just text
09:10:06     ^
09:10:06
09:10:06 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336)
09:10:06 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230)
09:10:06 at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220)
09:10:06 at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174)
09:10:06 at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158)
09:10:06 at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491)
09:10:06 at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470)
09:10:06 at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113)
09:10:06 at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60)
09:10:06 at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:10:06 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:10:06 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:10:06 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:10:06 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:10:06 Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null
09:10:06 at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393)
09:10:06 at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332)
09:10:06 ... 76 common frames omitted
09:10:06 09:10:06.232 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template.
09:10:06 09:10:06.232 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty.
09:10:06 09:10:06.233 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection with localhost/127.0.0.1 (channelId=1) disconnected
09:10:06 java.net.ConnectException: Connection refused
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.233 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Node 1 disconnected.
09:10:06 09:10:06.233 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Connection to node 1 (localhost/127.0.0.1:40117) could not be established. Broker may not be available.
09:10:06 09:10:06.245 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.246 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 09:10:06.257 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23
09:10:06
09:10:06 description: Simple template to deploy a stack with two virtual machine instances
09:10:06
09:10:06 parameters:
09:10:06   image_name_1:
09:10:06     type: string
09:10:06     label: Image Name
09:10:06     description: SCOIMAGE Specify an image name for instance1
09:10:06     default: cirros-0.3.1-x86_64
09:10:06   image_name_2:
09:10:06     type: string
09:10:06     label: Image Name
09:10:06     description: SCOIMAGE Specify an image name for instance2
09:10:06     default: cirros-0.3.1-x86_64
09:10:06   network_id:
09:10:06     type: string
09:10:06     label: Network ID
09:10:06     description: SCONETWORK Network to be used for the compute instance
09:10:06     hidden: true
09:10:06     constraints:
09:10:06       - length: { min: 6, max: 8 }
09:10:06         description: Password length must be between 6 and 8 characters.
09:10:06       - range: { min: 6, max: 8 }
09:10:06         description: Range description
09:10:06       - allowed_values:
09:10:06           - m1.small
09:10:06           - m1.medium
09:10:06           - m1.large
09:10:06         description: Allowed values description
09:10:06       - allowed_pattern: "[a-zA-Z0-9]+"
09:10:06         description: Password must consist of characters and numbers only.
09:10:06       - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*"
09:10:06         description: Password must start with an uppercase character.
09:10:06       - custom_constraint: nova.keypair
09:10:06         description: Custom description
09:10:06
09:10:06 resources:
09:10:06   my_instance1:
09:10:06     type: OS::Nova::Server
09:10:06     properties:
09:10:06       image: { get_param: image_name_1 }
09:10:06       flavor: m1.small
09:10:06       networks:
09:10:06         - network : { get_param : network_id }
09:10:06   my_instance2:
09:10:06     type: OS::Nova::Server
09:10:06     properties:
09:10:06       image: { get_param: image_name_2 }
09:10:06       flavor: m1.tiny
09:10:06       networks:
09:10:06         - network : { get_param : network_id }
09:10:06 09:10:06.334 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance}
09:10:06 09:10:06.340 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance}
09:10:06 09:10:06.341 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23
09:10:06
09:10:06 description: Simple template to deploy a stack with two virtual machine instances
09:10:06 09:10:06.342 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty.
09:10:06 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 s - in org.onap.sdc.impl.HeatParserTest
09:10:06 [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest
09:10:06 09:10:06.346 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.346 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest
09:10:06 [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest
09:10:06 [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest
09:10:06 [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest
09:10:06 [WARNING] Tests run: 7, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.004 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest
09:10:06 [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest
09:10:06 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.impl.ConfigurationValidatorTest
09:10:06 [INFO] Running org.onap.sdc.impl.DistributionClientTest
09:10:06 09:10:06.380 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.382 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server
09:10:06 09:10:06.382 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC
09:10:06 09:10:06.382 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@41be1359
09:10:06 09:10:06.383 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
09:10:06 	acks = -1
09:10:06 	batch.size = 16384
09:10:06 	bootstrap.servers = [localhost:9092]
09:10:06 	buffer.memory = 33554432
09:10:06 	client.dns.lookup = use_all_dns_ips
09:10:06 	client.id = mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123
09:10:06 	compression.type = none
09:10:06 	connections.max.idle.ms = 540000
09:10:06 	delivery.timeout.ms = 120000
09:10:06 	enable.idempotence = true
09:10:06 	interceptor.classes = []
09:10:06 	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:10:06 	linger.ms = 0
09:10:06 	max.block.ms = 60000
09:10:06 	max.in.flight.requests.per.connection = 5
09:10:06 	max.request.size = 1048576
09:10:06 	metadata.max.age.ms = 300000
09:10:06 	metadata.max.idle.ms = 300000
09:10:06 	metric.reporters = []
09:10:06 	metrics.num.samples = 2
09:10:06 	metrics.recording.level = INFO
09:10:06 	metrics.sample.window.ms = 30000
09:10:06 	partitioner.adaptive.partitioning.enable = true
09:10:06 	partitioner.availability.timeout.ms = 0
09:10:06 	partitioner.class = null
09:10:06 	partitioner.ignore.keys = false
09:10:06 	receive.buffer.bytes = 32768
09:10:06 	reconnect.backoff.max.ms = 1000
09:10:06 	reconnect.backoff.ms = 50
09:10:06 	request.timeout.ms = 30000
09:10:06 	retries = 2147483647
09:10:06 	retry.backoff.ms = 100
09:10:06 	sasl.client.callback.handler.class = null
09:10:06 	sasl.jaas.config = [hidden]
09:10:06 	sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:10:06 	sasl.kerberos.min.time.before.relogin = 60000
09:10:06 	sasl.kerberos.service.name = null
09:10:06 	sasl.kerberos.ticket.renew.jitter = 0.05
09:10:06 	sasl.kerberos.ticket.renew.window.factor = 0.8
09:10:06 	sasl.login.callback.handler.class = null
09:10:06 	sasl.login.class = null
09:10:06 	sasl.login.connect.timeout.ms = null
09:10:06 	sasl.login.read.timeout.ms = null
09:10:06 	sasl.login.refresh.buffer.seconds = 300
09:10:06 	sasl.login.refresh.min.period.seconds = 60
09:10:06 	sasl.login.refresh.window.factor = 0.8
09:10:06 	sasl.login.refresh.window.jitter = 0.05
09:10:06 	sasl.login.retry.backoff.max.ms = 10000
09:10:06 	sasl.login.retry.backoff.ms = 100
09:10:06 	sasl.mechanism = PLAIN
09:10:06 	sasl.oauthbearer.clock.skew.seconds = 30
09:10:06 	sasl.oauthbearer.expected.audience = null
09:10:06 	sasl.oauthbearer.expected.issuer = null
09:10:06 	sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:10:06 	sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:10:06 	sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:10:06 	sasl.oauthbearer.jwks.endpoint.url = null
09:10:06 	sasl.oauthbearer.scope.claim.name = scope
09:10:06 	sasl.oauthbearer.sub.claim.name = sub
09:10:06 	sasl.oauthbearer.token.endpoint.url = null
09:10:06 	security.protocol = SASL_PLAINTEXT
09:10:06 	security.providers = null
09:10:06 	send.buffer.bytes = 131072
09:10:06 	socket.connection.setup.timeout.max.ms = 30000
09:10:06 	socket.connection.setup.timeout.ms = 10000
09:10:06 	ssl.cipher.suites = null
09:10:06 	ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:10:06 	ssl.endpoint.identification.algorithm = https
09:10:06 	ssl.engine.factory.class = null
09:10:06 	ssl.key.password = null
09:10:06 	ssl.keymanager.algorithm = SunX509
09:10:06 	ssl.keystore.certificate.chain = null
09:10:06 	ssl.keystore.key = null
09:10:06 	ssl.keystore.location = null
09:10:06 	ssl.keystore.password = null
09:10:06 	ssl.keystore.type = JKS
09:10:06 	ssl.protocol = TLSv1.3
09:10:06 	ssl.provider = null
09:10:06 	ssl.secure.random.implementation = null
09:10:06 	ssl.trustmanager.algorithm = PKIX
09:10:06 	ssl.truststore.certificates = null
09:10:06 	ssl.truststore.location = null
09:10:06 	ssl.truststore.password = null
09:10:06 	ssl.truststore.type = JKS
09:10:06 	transaction.timeout.ms = 60000
09:10:06 	transactional.id = null
09:10:06 	value.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:10:06
09:10:06 09:10:06.384 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.385 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Instantiated an idempotent producer.
09:10:06 09:10:06.387 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1
09:10:06 09:10:06.387 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5
09:10:06 09:10:06.387 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973806387
09:10:06 09:10:06.387 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Starting Kafka producer I/O thread.
09:10:06 09:10:06.387 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Transition from state UNINITIALIZED to INITIALIZING
09:10:06 09:10:06.387 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Kafka producer started
09:10:06 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully]
09:10:06 09:10:06.387 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.388 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.388 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized
09:10:06 09:10:06.388 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
09:10:06 09:10:06.388 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:06 09:10:06.388 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
09:10:06 09:10:06.389 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:06 09:10:06.389 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:06 09:10:06.389 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient
09:10:06 09:10:06.391 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.392 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD]
09:10:06 09:10:06.392 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:10:06 java.net.ConnectException: Connection refused
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.392 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.392 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Node -1 disconnected.
09:10:06 09:10:06.392 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD]
09:10:06 09:10:06.392 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
09:10:06 09:10:06.392 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
09:10:06 09:10:06.392 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.392 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry.
09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed.
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.393 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME]
09:10:06 09:10:06.393 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.393 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME]
09:10:06 09:10:06.393 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.393 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN]
09:10:06 09:10:06.394 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.394 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN]
09:10:06 09:10:06.394 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.394 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN]
09:10:06 09:10:06.395 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.395 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID]
09:10:06 09:10:06.395 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.395 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID]
09:10:06 09:10:06.396 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.396 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME]
09:10:06 09:10:06.396 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.396 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME]
09:10:06 09:10:06.396 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient
09:10:06 09:10:06.396 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized
09:10:06 isUseHttpsWithSDC set to true
09:10:06 09:10:06.398 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.435 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.442 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= d91d2b3e-b21e-426d-906d-3c3b192817c5 url= /sdc/v1/artifactTypes
09:10:06 09:10:06.442 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes
09:10:06 09:10:06.446 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.446 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 09:10:06.485 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.493 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.493 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
09:10:06 09:10:06.493 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:06 09:10:06.493 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
09:10:06 09:10:06.494 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:06 09:10:06.494 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:06 09:10:06.495 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:10:06 java.net.ConnectException: Connection refused
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.495 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Node -1 disconnected.
09:10:06 09:10:06.496 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
09:10:06 09:10:06.496 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
09:10:06 09:10:06.496 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry.
09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed.
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.500 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes
09:10:06 java.net.UnknownHostException: badhost: System error
09:10:06 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
09:10:06 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929)
09:10:06 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529)
09:10:06 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848)
09:10:06 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519)
09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378)
09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306)
09:10:06 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
09:10:06 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112)
09:10:06 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
09:10:06 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
09:10:06 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
09:10:06 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
09:10:06 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
09:10:06 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
09:10:06 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
09:10:06 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116)
09:10:06 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120)
09:10:06 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129)
09:10:06 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source)
09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239)
09:10:06 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253)
09:10:06 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142)
09:10:06 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45)
09:10:06 at org.mockito.Answers.answer(Answers.java:99)
09:10:06 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110)
09:10:06 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
09:10:06 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117)
09:10:06 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:10:06 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:10:06 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:10:06 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:10:06 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67)
09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52)
09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96)
09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75)
09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
09:10:06 09:10:06.500 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@6cd475c4
09:10:06 09:10:06.500 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem]
09:10:06 09:10:06.500 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect
09:10:06 09:10:06.501 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init
09:10:06 09:10:06.523 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= f7546583-f65b-4573-8a63-4fcd69da5ae1 url= /sdc/v1/artifactTypes
09:10:06 09:10:06.524 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes
09:10:06 09:10:06.527 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes
09:10:06 org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
09:10:06 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
09:10:06 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
09:10:06 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
09:10:06 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
09:10:06 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
09:10:06 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
09:10:06 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
09:10:06 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
09:10:06 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116)
09:10:06 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120)
09:10:06 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129)
09:10:06 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710)
09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source)
09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239)
09:10:06 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253)
09:10:06 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142)
09:10:06 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45)
09:10:06 at org.mockito.Answers.answer(Answers.java:99)
09:10:06 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110)
09:10:06 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29)
09:10:06 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82)
09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151)
09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117)
09:10:06 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
09:10:06 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
09:10:06 at java.base/java.lang.reflect.Method.invoke(Method.java:566)
09:10:06 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688)
09:10:06 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131)
09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129)
09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127)
09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126)
09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84)
09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32)
09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:10:06 at
org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 09:10:06 Caused by: java.net.ConnectException: Connection refused (Connection refused) 09:10:06 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 09:10:06 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 09:10:06 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 09:10:06 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 09:10:06 at 
java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 09:10:06 at java.base/java.net.Socket.connect(Socket.java:609) 09:10:06 at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) 09:10:06 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 09:10:06 ... 98 common frames omitted 09:10:06 09:10:06.527 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@75d12c54 09:10:06 09:10:06.527 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 09:10:06 09:10:06.527 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 09:10:06 09:10:06.528 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.528 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.530 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 09:10:06 09:10:06.531 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 09:10:06 09:10:06.531 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 09:10:06 09:10:06.531 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@79a83ef 09:10:06 09:10:06.531 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 09:10:06 acks = -1 09:10:06 batch.size = 16384 09:10:06 bootstrap.servers = [localhost:9092] 09:10:06 buffer.memory = 33554432 09:10:06 client.dns.lookup = use_all_dns_ips 09:10:06 client.id = mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e 09:10:06 compression.type = none 09:10:06 
connections.max.idle.ms = 540000 09:10:06 delivery.timeout.ms = 120000 09:10:06 enable.idempotence = true 09:10:06 interceptor.classes = [] 09:10:06 key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:10:06 linger.ms = 0 09:10:06 max.block.ms = 60000 09:10:06 max.in.flight.requests.per.connection = 5 09:10:06 max.request.size = 1048576 09:10:06 metadata.max.age.ms = 300000 09:10:06 metadata.max.idle.ms = 300000 09:10:06 metric.reporters = [] 09:10:06 metrics.num.samples = 2 09:10:06 metrics.recording.level = INFO 09:10:06 metrics.sample.window.ms = 30000 09:10:06 partitioner.adaptive.partitioning.enable = true 09:10:06 partitioner.availability.timeout.ms = 0 09:10:06 partitioner.class = null 09:10:06 partitioner.ignore.keys = false 09:10:06 receive.buffer.bytes = 32768 09:10:06 reconnect.backoff.max.ms = 1000 09:10:06 reconnect.backoff.ms = 50 09:10:06 request.timeout.ms = 30000 09:10:06 retries = 2147483647 09:10:06 retry.backoff.ms = 100 09:10:06 sasl.client.callback.handler.class = null 09:10:06 sasl.jaas.config = [hidden] 09:10:06 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:10:06 sasl.kerberos.min.time.before.relogin = 60000 09:10:06 sasl.kerberos.service.name = null 09:10:06 sasl.kerberos.ticket.renew.jitter = 0.05 09:10:06 sasl.kerberos.ticket.renew.window.factor = 0.8 09:10:06 sasl.login.callback.handler.class = null 09:10:06 sasl.login.class = null 09:10:06 sasl.login.connect.timeout.ms = null 09:10:06 sasl.login.read.timeout.ms = null 09:10:06 sasl.login.refresh.buffer.seconds = 300 09:10:06 sasl.login.refresh.min.period.seconds = 60 09:10:06 sasl.login.refresh.window.factor = 0.8 09:10:06 sasl.login.refresh.window.jitter = 0.05 09:10:06 sasl.login.retry.backoff.max.ms = 10000 09:10:06 sasl.login.retry.backoff.ms = 100 09:10:06 sasl.mechanism = PLAIN 09:10:06 sasl.oauthbearer.clock.skew.seconds = 30 09:10:06 sasl.oauthbearer.expected.audience = null 09:10:06 sasl.oauthbearer.expected.issuer = null 09:10:06 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:10:06 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:10:06 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:10:06 sasl.oauthbearer.jwks.endpoint.url = null 09:10:06 sasl.oauthbearer.scope.claim.name = scope 09:10:06 sasl.oauthbearer.sub.claim.name = sub 09:10:06 sasl.oauthbearer.token.endpoint.url = null 09:10:06 security.protocol = SASL_PLAINTEXT 09:10:06 security.providers = null 09:10:06 send.buffer.bytes = 131072 09:10:06 socket.connection.setup.timeout.max.ms = 30000 09:10:06 socket.connection.setup.timeout.ms = 10000 09:10:06 ssl.cipher.suites = null 09:10:06 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:10:06 ssl.endpoint.identification.algorithm = https 09:10:06 ssl.engine.factory.class = null 09:10:06 ssl.key.password = null 09:10:06 ssl.keymanager.algorithm = SunX509 09:10:06 ssl.keystore.certificate.chain = null 09:10:06 ssl.keystore.key = null 09:10:06 ssl.keystore.location = null 09:10:06 ssl.keystore.password = null 09:10:06 ssl.keystore.type = JKS 09:10:06 ssl.protocol = TLSv1.3 09:10:06 ssl.provider = null 09:10:06 ssl.secure.random.implementation = null 09:10:06 ssl.trustmanager.algorithm = PKIX 09:10:06 ssl.truststore.certificates = null 09:10:06 ssl.truststore.location = null 09:10:06 ssl.truststore.password = null 09:10:06 ssl.truststore.type = JKS 09:10:06 transaction.timeout.ms = 60000 09:10:06 transactional.id = null 09:10:06 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:10:06 09:10:06 09:10:06.532 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Instantiated an idempotent producer. 
09:10:06 09:10:06.534 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:10:06 09:10:06.534 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:10:06 09:10:06.534 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973806534 09:10:06 09:10:06.534 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Starting Kafka producer I/O thread. 09:10:06 09:10:06.534 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Kafka producer started 09:10:06 09:10:06.534 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Transition from state UNINITIALIZED to INITIALIZING 09:10:06 DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 09:10:06 09:10:06.534 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 09:10:06 09:10:06.534 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 09:10:06 09:10:06.535 
[main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.535 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:06 09:10:06.535 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 09:10:06 09:10:06.535 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:06 09:10:06.535 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:06 09:10:06.535 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:06 09:10:06.536 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 09:10:06 09:10:06.536 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.537 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.network.Selector - [Producer 
clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 09:10:06 java.net.ConnectException: Connection refused 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:06 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:06 09:10:06.537 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.537 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.537 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Node -1 disconnected. 
09:10:06 09:10:06.537 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 09:10:06 09:10:06.537 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 09:10:06 09:10:06.538 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:06 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:06 09:10:06.541 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.541 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.542 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 09:10:06 09:10:06.542 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 09:10:06 09:10:06.543 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.543 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.544 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 09:10:06 09:10:06.547 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available 09:10:06 09:10:06.547 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker 
available to send FindCoordinator request 09:10:06 09:10:06.547 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 2782e7a3-b72d-4e5a-a498-b2fd1b1dcae2 url= /sdc/v1/artifactTypes 09:10:06 09:10:06.547 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 09:10:06 09:10:06.552 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 09:10:06 java.net.UnknownHostException: proxy: System error 09:10:06 at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) 09:10:06 at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) 09:10:06 at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) 09:10:06 at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) 09:10:06 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 09:10:06 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 09:10:06 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 09:10:06 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 09:10:06 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 09:10:06 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 09:10:06 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 09:10:06 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 09:10:06 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 09:10:06 at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 09:10:06 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 09:10:06 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 09:10:06 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 09:10:06 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source) 09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 09:10:06 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 09:10:06 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 09:10:06 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 09:10:06 at org.mockito.Answers.answer(Answers.java:99) 09:10:06 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 09:10:06 at 
org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 09:10:06 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 09:10:06 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) 09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 09:10:06 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:10:06 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 09:10:06 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 09:10:06 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 09:10:06 at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 09:10:06 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 09:10:06 09:10:06.553 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@12cab957 09:10:06 09:10:06.553 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 09:10:06 09:10:06.553 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 09:10:06 09:10:06.553 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 09:10:06 09:10:06.554 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 1441d1e4-4b5c-451e-a016-0bc114ebe499 url= /sdc/v1/artifactTypes 09:10:06 09:10:06.554 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 09:10:06 09:10:06.555 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes 09:10:06 java.net.UnknownHostException: proxy 09:10:06 at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) 09:10:06 at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) 09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) 09:10:06 at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) 09:10:06 at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) 09:10:06 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) 09:10:06 at 
org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 09:10:06 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) 09:10:06 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 09:10:06 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 09:10:06 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 09:10:06 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 09:10:06 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 09:10:06 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 09:10:06 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 09:10:06 at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) 09:10:06 at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) 09:10:06 at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) 09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$bEIykPBv.invokeWithArguments(Unknown Source) 09:10:06 at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) 09:10:06 at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) 09:10:06 at 
org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) 09:10:06 at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) 09:10:06 at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) 09:10:06 at org.mockito.Answers.answer(Answers.java:99) 09:10:06 at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) 09:10:06 at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) 09:10:06 at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) 09:10:06 at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) 09:10:06 at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) 09:10:06 at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) 09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 09:10:06 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 09:10:06 at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 09:10:06 at java.base/java.lang.reflect.Method.invoke(Method.java:566) 09:10:06 at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) 09:10:06 at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) 09:10:06 at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) 09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) 09:10:06 at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) 09:10:06 at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) 09:10:06 at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) 09:10:06 at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) 09:10:06 
at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) 09:10:06 at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) 09:10:06 at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) 09:10:06 at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) 09:10:06 at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) 09:10:06 at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) 09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) 09:10:06 at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) 09:10:06 at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) 09:10:06 at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) 09:10:06 at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) 09:10:06 at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) 09:10:06 at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 09:10:06 09:10:06.555 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@50bdd9b1 09:10:06 09:10:06.555 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 09:10:06 09:10:06.555 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 09:10:06 09:10:06.556 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.556 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.558 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.558 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.559 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 09:10:06 09:10:06.559 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 09:10:06 09:10:06.559 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 09:10:06 09:10:06.559 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 09:10:06 09:10:06.560 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.560 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 09:10:06.561 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 09:10:06 09:10:06.561 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 09:10:06 09:10:06.561 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 09:10:06 09:10:06.562 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 09:10:06 09:10:06.562 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 09:10:06 09:10:06.562 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@3d41b3cc 09:10:06 09:10:06.562 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: 09:10:06 acks = -1 09:10:06 batch.size = 16384 09:10:06 bootstrap.servers = [localhost:9092] 09:10:06 buffer.memory = 33554432 09:10:06 client.dns.lookup = use_all_dns_ips 09:10:06 client.id = mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c 09:10:06 compression.type = none 09:10:06 connections.max.idle.ms = 540000 09:10:06 delivery.timeout.ms = 120000 09:10:06 enable.idempotence = true 09:10:06 interceptor.classes = [] 09:10:06 
key.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:10:06 linger.ms = 0 09:10:06 max.block.ms = 60000 09:10:06 max.in.flight.requests.per.connection = 5 09:10:06 max.request.size = 1048576 09:10:06 metadata.max.age.ms = 300000 09:10:06 metadata.max.idle.ms = 300000 09:10:06 metric.reporters = [] 09:10:06 metrics.num.samples = 2 09:10:06 metrics.recording.level = INFO 09:10:06 metrics.sample.window.ms = 30000 09:10:06 partitioner.adaptive.partitioning.enable = true 09:10:06 partitioner.availability.timeout.ms = 0 09:10:06 partitioner.class = null 09:10:06 partitioner.ignore.keys = false 09:10:06 receive.buffer.bytes = 32768 09:10:06 reconnect.backoff.max.ms = 1000 09:10:06 reconnect.backoff.ms = 50 09:10:06 request.timeout.ms = 30000 09:10:06 retries = 2147483647 09:10:06 retry.backoff.ms = 100 09:10:06 sasl.client.callback.handler.class = null 09:10:06 sasl.jaas.config = [hidden] 09:10:06 sasl.kerberos.kinit.cmd = /usr/bin/kinit 09:10:06 sasl.kerberos.min.time.before.relogin = 60000 09:10:06 sasl.kerberos.service.name = null 09:10:06 sasl.kerberos.ticket.renew.jitter = 0.05 09:10:06 sasl.kerberos.ticket.renew.window.factor = 0.8 09:10:06 sasl.login.callback.handler.class = null 09:10:06 sasl.login.class = null 09:10:06 sasl.login.connect.timeout.ms = null 09:10:06 sasl.login.read.timeout.ms = null 09:10:06 sasl.login.refresh.buffer.seconds = 300 09:10:06 sasl.login.refresh.min.period.seconds = 60 09:10:06 sasl.login.refresh.window.factor = 0.8 09:10:06 sasl.login.refresh.window.jitter = 0.05 09:10:06 sasl.login.retry.backoff.max.ms = 10000 09:10:06 sasl.login.retry.backoff.ms = 100 09:10:06 sasl.mechanism = PLAIN 09:10:06 sasl.oauthbearer.clock.skew.seconds = 30 09:10:06 sasl.oauthbearer.expected.audience = null 09:10:06 sasl.oauthbearer.expected.issuer = null 09:10:06 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 09:10:06 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 09:10:06 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 09:10:06 sasl.oauthbearer.jwks.endpoint.url = null 09:10:06 sasl.oauthbearer.scope.claim.name = scope 09:10:06 sasl.oauthbearer.sub.claim.name = sub 09:10:06 sasl.oauthbearer.token.endpoint.url = null 09:10:06 security.protocol = SASL_PLAINTEXT 09:10:06 security.providers = null 09:10:06 send.buffer.bytes = 131072 09:10:06 socket.connection.setup.timeout.max.ms = 30000 09:10:06 socket.connection.setup.timeout.ms = 10000 09:10:06 ssl.cipher.suites = null 09:10:06 ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 09:10:06 ssl.endpoint.identification.algorithm = https 09:10:06 ssl.engine.factory.class = null 09:10:06 ssl.key.password = null 09:10:06 ssl.keymanager.algorithm = SunX509 09:10:06 ssl.keystore.certificate.chain = null 09:10:06 ssl.keystore.key = null 09:10:06 ssl.keystore.location = null 09:10:06 ssl.keystore.password = null 09:10:06 ssl.keystore.type = JKS 09:10:06 ssl.protocol = TLSv1.3 09:10:06 ssl.provider = null 09:10:06 ssl.secure.random.implementation = null 09:10:06 ssl.trustmanager.algorithm = PKIX 09:10:06 ssl.truststore.certificates = null 09:10:06 ssl.truststore.location = null 09:10:06 ssl.truststore.password = null 09:10:06 ssl.truststore.type = JKS 09:10:06 transaction.timeout.ms = 60000 09:10:06 transactional.id = null 09:10:06 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:10:06 09:10:06 09:10:06.563 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Instantiated an idempotent producer. 
09:10:06 09:10:06.565 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:10:06 09:10:06.565 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Starting Kafka producer I/O thread. 09:10:06 09:10:06.565 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973806565 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Transition from state UNINITIALIZED to INITIALIZING 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 09:10:06 09:10:06.565 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Kafka producer started 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:06 09:10:06.565 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:06 09:10:06.567 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 09:10:06 java.net.ConnectException: Connection refused 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:06 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:06 09:10:06.567 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Node -1 disconnected. 09:10:06 09:10:06.567 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
09:10:06 09:10:06.567 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 09:10:06 09:10:06.567 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. 09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:06 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:06 09:10:06.586 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available 09:10:06 Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, 
environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 09:10:06 09:10:06.590 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.592 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 09:10:06 09:10:06.592 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 09:10:06 09:10:06.592 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 09:10:06 09:10:06.592 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 09:10:06 [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 s - in org.onap.sdc.impl.DistributionClientTest 09:10:06 09:10:06.596 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 09:10:06 09:10:06.596 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 09:10:06 09:10:06.596 
[kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 09:10:06 09:10:06.596 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 09:10:06 09:10:06.596 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Set SASL client state to SEND_APIVERSIONS_REQUEST 09:10:06 09:10:06.596 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 09:10:06 09:10:06.597 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection with localhost/127.0.0.1 (channelId=-1) disconnected 09:10:06 java.net.ConnectException: Connection refused 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) 09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) 09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) 09:10:06 at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) 09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481) 09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) 09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) 09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) 09:10:06 at java.base/java.lang.Thread.run(Thread.java:829) 09:10:06 09:10:06.597 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Node -1 disconnected. 09:10:06 09:10:06.597 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 
09:10:06 09:10:06.597 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
09:10:06 09:10:06.597 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry.
09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed.
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.636 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:06 09:10:06.638 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:06 09:10:06.639 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:10:06 java.net.ConnectException: Connection refused
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.639 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Node -1 disconnected.
09:10:06 09:10:06.639 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
09:10:06 09:10:06.639 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
09:10:06 09:10:06.639 [kafka-producer-network-thread | mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-97b112bb-2505-47ed-bd4e-dd24b234ea4e] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry.
09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed.
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.647 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] Give up sending metadata request since no node is available
09:10:06 09:10:06.647 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-f5c1d64b-2a77-4abe-97e2-5d595acfbd82, groupId=mso-group] No broker available to send FindCoordinator request
09:10:06 09:10:06.667 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.667 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request
09:10:06 09:10:06.667 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1
09:10:06 09:10:06.667 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1
09:10:06 09:10:06.668 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Set SASL client state to SEND_APIVERSIONS_REQUEST
09:10:06 09:10:06.668 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN]
09:10:06 09:10:06.668 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Connection with localhost/127.0.0.1 (channelId=-1) disconnected
09:10:06 java.net.ConnectException: Connection refused
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
09:10:06 at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
09:10:06 at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
09:10:06 at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224)
09:10:06 at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526)
09:10:06 at org.apache.kafka.common.network.Selector.poll(Selector.java:481)
09:10:06 at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42)
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.669 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Node -1 disconnected.
09:10:06 09:10:06.669 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
09:10:06 09:10:06.669 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
09:10:06 09:10:06.669 [kafka-producer-network-thread | mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-944da5b4-ee50-478d-8683-7c42aeddf36c] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry.
09:10:06 java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed.
09:10:06 at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316)
09:10:06 at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243)
09:10:06 at java.base/java.lang.Thread.run(Thread.java:829)
09:10:06 09:10:06.686 [kafka-producer-network-thread | mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-a431b68b-e45a-4ed5-8b19-0482c7f0322b] Give up sending metadata request since no node is available
09:10:06 09:10:06.697 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.697 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1)
09:10:06 09:10:06.697 [kafka-producer-network-thread | mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ed478832-32af-451a-a221-ab91f035a123] Give up sending metadata request since no node is available
09:10:07 [INFO]
09:10:07 [INFO] Results:
09:10:07 [INFO]
09:10:07 [WARNING] Tests run: 71, Failures: 0, Errors: 0, Skipped: 1
09:10:07 [INFO]
09:10:07 [INFO]
09:10:07 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client ---
09:10:07 [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec
09:10:07 [INFO] Analyzed bundle 'sdc-distribution-client' with 44 classes
09:10:07 [INFO]
09:10:07 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client ---
09:10:07 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT.jar
09:10:07 [INFO]
09:10:07 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client ---
09:10:07 [INFO] No previous run data found, generating javadoc.
09:10:09 [INFO]
09:10:09 Loading source files for package org.onap.sdc.http...
09:10:09 Loading source files for package org.onap.sdc.utils...
09:10:09 Loading source files for package org.onap.sdc.utils.kafka...
09:10:09 Loading source files for package org.onap.sdc.utils.heat...
09:10:09 Loading source files for package org.onap.sdc.impl...
09:10:09 Constructing Javadoc information...
09:10:09 Standard Doclet version 11.0.16
09:10:09 Building tree for all the packages and classes...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/constant-values.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/serialized-form.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html...
09:10:09 Building index for all the packages and classes...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-tree.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index-all.html...
09:10:09 Building index for all classes...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses-index.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allpackages-index.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/deprecated-list.html...
09:10:09 Building index for all classes...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-summary.html...
09:10:09 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/help-doc.html...
09:10:09 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar
09:10:09 [INFO]
09:10:09 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client ---
09:10:09 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:10:09 [INFO]
09:10:09 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client ---
09:10:09 [INFO]
09:10:09 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client ---
09:10:09 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:10:09 [INFO]
09:10:09 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client ---
09:10:09 [INFO]
09:10:09 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client ---
09:10:09 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT.jar
09:10:09 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT.pom
09:10:09 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.2.0-SNAPSHOT/sdc-distribution-client-2.2.0-SNAPSHOT-javadoc.jar
09:10:09 [INFO]
09:10:09 [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------
09:10:09 [INFO] Building sdc-distribution-ci 2.2.0-SNAPSHOT [4/4]
09:10:09 [INFO] --------------------------------[ jar ]---------------------------------
09:10:10 [INFO]
09:10:10 [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci ---
09:10:10 [INFO]
09:10:10 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci ---
09:10:10 [INFO]
09:10:10 [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci ---
09:10:10 [INFO]
09:10:10 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci ---
09:10:10 [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:10:10 [INFO]
09:10:10 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci ---
09:10:10 [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:10:10 [INFO]
09:10:10 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci ---
09:10:10 [INFO]
09:10:10 [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci ---
09:10:10 [INFO]
09:10:10 [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci ---
09:10:10 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:10:10 [INFO] Copying 1 resource
09:10:10 [INFO]
09:10:10 [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci ---
09:10:10 [INFO] Changes detected - recompiling the module!
09:10:10 [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/classes
09:10:10 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API.
09:10:10 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details.
09:10:10 [INFO]
09:10:10 [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci ---
09:10:10 [INFO] Using 'UTF-8' encoding to copy filtered resources.
09:10:10 [INFO] Copying 2 resources
09:10:10 [INFO]
09:10:10 [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci ---
09:10:10 [INFO] Changes detected - recompiling the module!
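The two compiler notices above ("uses or overrides a deprecated API" / "Recompile with -Xlint:deprecation for details") are javac's standard deprecation diagnostics; the build still succeeds, the flag merely reveals the exact call sites. A minimal self-contained illustration of the mechanism (hypothetical classes, not the actual ClientNotifyCallback code):

```java
// Minimal illustration of the javac notice above (hypothetical classes,
// not the real ClientNotifyCallback): calling a @Deprecated member makes
// javac report "uses or overrides a deprecated API" for the file, unless
// the call site is annotated with @SuppressWarnings("deprecation").
public class DeprecationDemo {
    static class LegacyApi {
        @Deprecated
        static String oldGreeting() { return "hello"; }
        static String newGreeting() { return "hello"; }
    }

    @SuppressWarnings("deprecation") // silences the diagnostic at this call site only
    static String callLegacy() {
        return LegacyApi.oldGreeting();
    }

    public static void main(String[] args) {
        System.out.println(callLegacy()); // prints "hello"
    }
}
```

Compiling with `javac -Xlint:deprecation` (or removing the `@SuppressWarnings`) prints a per-line warning instead of the one-line summary seen in the log.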
09:10:10 [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/test-classes
09:10:11 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API.
09:10:11 [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details.
09:10:11 [INFO]
09:10:11 [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci ---
09:10:11 [INFO]
09:10:11 [INFO] -------------------------------------------------------
09:10:11 [INFO] T E S T S
09:10:11 [INFO] -------------------------------------------------------
09:10:11 [INFO] Running org.onap.test.core.service.ClientInitializerTest
09:10:11 EnvironmentVariableExtension: This extension uses reflection to mutate JDK-internal state, which is fragile. Check the Javadoc or documentation for more details.
09:10:12 09:10:12.203 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/home/jenkins/.testcontainers.properties but the file was not found. Exception message: FileNotFoundException: /home/jenkins/.testcontainers.properties (No such file or directory)
09:10:12 09:10:12.210 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
09:10:13 09:10:13.170 [main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock)
09:10:13 09:10:13.183 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost
09:10:13 09:10:13.230 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker:
09:10:13 Server Version: 20.10.18
09:10:13 API Version: 1.41
09:10:13 Operating System: Ubuntu 18.04.6 LTS
09:10:13 Total Memory: 32167 MB
09:10:13 09:10:13.270 [main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling docker image: testcontainers/ryuk:0.3.3. Please be patient; this may take some time but only needs to be done once.
09:10:13 09:10:13.280 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory)
09:10:14 09:10:14.024 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Starting to pull image
09:10:14 09:10:14.062 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
09:10:14 09:10:14.477 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (31 KB/? MB)
09:10:14 09:10:14.491 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (55 KB/? MB)
09:10:14 09:10:14.505 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 0 extracted, (59 KB/5 MB)
09:10:14 09:10:14.867 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 1 extracted, (2 MB/5 MB)
09:10:15 09:10:15.340 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 2 extracted, (2 MB/5 MB)
09:10:15 09:10:15.752 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 3 extracted, (5 MB/5 MB)
09:10:16 09:10:16.175 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pull complete. 3 layers, pulled in 2s (downloaded 5 MB at 2 MB/s)
09:10:18 09:10:18.853 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
09:10:18 09:10:18.854 [main] INFO org.testcontainers.DockerClientFactory - Checking the system...
09:10:18 09:10:18.855 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
09:10:18 09:10:18.953 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space
09:10:18 09:10:18.960 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once.
09:10:19 09:10:19.423 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Starting to pull image
09:10:19 09:10:19.425 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
09:10:19 09:10:19.526 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (739 bytes/? MB)
09:10:19 09:10:19.695 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (25 MB/? MB)
09:10:19 09:10:19.797 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (48 MB/? MB)
09:10:19 09:10:19.904 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (70 MB/? MB)
09:10:20 09:10:20.010 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (84 MB/? MB)
09:10:20 09:10:20.102 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (111 MB/? MB)
09:10:20 09:10:20.189 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (143 MB/? MB)
09:10:20 09:10:20.205 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (143 MB/? MB)
09:10:20 09:10:20.315 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 0 extracted, (161 MB/? MB)
09:10:20 09:10:20.871 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 0 extracted, (311 MB/? MB)
09:10:21 09:10:21.095 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 1 extracted, (352 MB/? MB)
09:10:21 09:10:21.125 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 1 extracted, (352 MB/370 MB)
09:10:21 09:10:21.423 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 2 extracted, (352 MB/370 MB)
09:10:26 09:10:26.686 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 3 extracted, (359 MB/370 MB)
09:10:26 09:10:26.966 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 4 extracted, (365 MB/370 MB)
09:10:27 09:10:27.137 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 5 extracted, (365 MB/370 MB)
09:10:27 09:10:27.650 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 6 extracted, (368 MB/370 MB)
09:10:27 09:10:27.889 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 7 extracted, (368 MB/370 MB)
09:10:28 09:10:28.039 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 8 extracted, (368 MB/370 MB)
09:10:28 09:10:28.180 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 9 extracted, (368 MB/370 MB)
09:10:28 09:10:28.904 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 10 extracted, (370 MB/370 MB)
09:10:29 09:10:29.040 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 11 downloaded, 11 extracted, (370 MB/370 MB)
09:10:29 09:10:29.058 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pull complete. 11 layers, pulled in 9s (downloaded 370 MB at 41 MB/s)
09:10:29 09:10:29.066 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Creating container for image: confluentinc/cp-kafka:6.2.1
09:10:36 09:10:36.922 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 is starting: 2b813fcb398152db8cd1bb0a8295c8a9018abda5ef608174efb84017e6ade4d4
09:10:42 09:10:42.366 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 started in PT23.409761S
09:10:44 09:10:44.269 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling docker image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master. Please be patient; this may take some time but only needs to be done once.
09:10:44 09:10:44.270 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: nexus3.onap.org:10001/onap/onap-component-mock-sdc:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour.
Exception message: /home/jenkins/.docker/config.json (No such file or directory)
09:10:45 09:10:45.064 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Starting to pull image
09:10:45 09:10:45.066 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes)
09:10:45 09:10:45.306 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 0 extracted, (62 KB/5 MB)
09:10:45 09:10:45.480 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 1 extracted, (5 MB/5 MB)
09:10:45 09:10:45.526 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Creating container for image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master
09:10:45 09:10:45.712 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master is starting: 4efe16d642ca6fd6fb55d29b5fd5bbacc2a0001fc5c9651885da98ec4b1cd20e
09:10:46 09:10:46.176 [main] INFO org.testcontainers.containers.wait.strategy.HttpWaitStrategy - /hardcore_bell: Waiting for 60 seconds for URL: http://localhost:49155/sdc/v1/artifactTypes (where port 49155 maps to container port 30206)
09:10:46 09:10:46.206 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master started in PT1.93991S
09:10:47 09:10:47.297 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
09:10:47 acks = -1
09:10:47 batch.size = 16384
09:10:47 bootstrap.servers = [localhost:43219]
09:10:47 buffer.memory = 33554432
09:10:47 client.dns.lookup = use_all_dns_ips
09:10:47 client.id = dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3
09:10:47 compression.type = none
09:10:47 connections.max.idle.ms = 540000
09:10:47 delivery.timeout.ms = 120000
09:10:47 enable.idempotence = true
09:10:47 interceptor.classes = []
09:10:47 key.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:10:47 linger.ms = 0
09:10:47 max.block.ms = 60000
09:10:47 max.in.flight.requests.per.connection = 5
09:10:47 max.request.size = 1048576
09:10:47 metadata.max.age.ms = 300000
09:10:47 metadata.max.idle.ms = 300000
09:10:47 metric.reporters = []
09:10:47 metrics.num.samples = 2
09:10:47 metrics.recording.level = INFO
09:10:47 metrics.sample.window.ms = 30000
09:10:47 partitioner.adaptive.partitioning.enable = true
09:10:47 partitioner.availability.timeout.ms = 0
09:10:47 partitioner.class = null
09:10:47 partitioner.ignore.keys = false
09:10:47 receive.buffer.bytes = 32768
09:10:47 reconnect.backoff.max.ms = 1000
09:10:47 reconnect.backoff.ms = 50
09:10:47 request.timeout.ms = 30000
09:10:47 retries = 2147483647
09:10:47 retry.backoff.ms = 100
09:10:47 sasl.client.callback.handler.class = null
09:10:47 sasl.jaas.config = [hidden]
09:10:47 sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:10:47 sasl.kerberos.min.time.before.relogin = 60000
09:10:47 sasl.kerberos.service.name = null
09:10:47 sasl.kerberos.ticket.renew.jitter = 0.05
09:10:47 sasl.kerberos.ticket.renew.window.factor = 0.8
09:10:47 sasl.login.callback.handler.class = null
09:10:47 sasl.login.class = null
09:10:47 sasl.login.connect.timeout.ms = null
09:10:47 sasl.login.read.timeout.ms = null
09:10:47 sasl.login.refresh.buffer.seconds = 300
09:10:47 sasl.login.refresh.min.period.seconds = 60
09:10:47 sasl.login.refresh.window.factor = 0.8
09:10:47 sasl.login.refresh.window.jitter = 0.05
09:10:47 sasl.login.retry.backoff.max.ms = 10000
09:10:47 sasl.login.retry.backoff.ms = 100
09:10:47 sasl.mechanism = PLAIN
09:10:47 sasl.oauthbearer.clock.skew.seconds = 30
09:10:47 sasl.oauthbearer.expected.audience = null
09:10:47 sasl.oauthbearer.expected.issuer = null
09:10:47 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:10:47 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:10:47 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:10:47 sasl.oauthbearer.jwks.endpoint.url = null
09:10:47 sasl.oauthbearer.scope.claim.name = scope
09:10:47 sasl.oauthbearer.sub.claim.name = sub
09:10:47 sasl.oauthbearer.token.endpoint.url = null
09:10:47 security.protocol = SASL_PLAINTEXT
09:10:47 security.providers = null
09:10:47 send.buffer.bytes = 131072
09:10:47 socket.connection.setup.timeout.max.ms = 30000
09:10:47 socket.connection.setup.timeout.ms = 10000
09:10:47 ssl.cipher.suites = null
09:10:47 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:10:47 ssl.endpoint.identification.algorithm = https
09:10:47 ssl.engine.factory.class = null
09:10:47 ssl.key.password = null
09:10:47 ssl.keymanager.algorithm = SunX509
09:10:47 ssl.keystore.certificate.chain = null
09:10:47 ssl.keystore.key = null
09:10:47 ssl.keystore.location = null
09:10:47 ssl.keystore.password = null
09:10:47 ssl.keystore.type = JKS
09:10:47 ssl.protocol = TLSv1.3
09:10:47 ssl.provider = null
09:10:47 ssl.secure.random.implementation = null
09:10:47 ssl.trustmanager.algorithm = PKIX
09:10:47 ssl.truststore.certificates = null
09:10:47 ssl.truststore.location = null
09:10:47 ssl.truststore.password = null
09:10:47 ssl.truststore.type = JKS
09:10:47 transaction.timeout.ms = 60000
09:10:47 transactional.id = null
09:10:47 value.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:10:47
09:10:47 09:10:47.385 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Instantiated an idempotent producer.
09:10:47 09:10:47.436 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in.
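The ProducerConfig block above is Kafka logging its effective client settings at startup. As a rough sketch of how the key non-default values in that dump are assembled on the client side (stdlib `java.util.Properties` only; the class name `ProducerProps` is hypothetical, and port 43219 is just the ephemeral host port Testcontainers mapped for this particular run):

```java
import java.util.Properties;

// Sketch of the non-default client settings visible in the ProducerConfig
// dump above, gathered as plain Properties. The keys and values mirror the
// log; the helper itself is illustrative, not code from this repository.
public class ProducerProps {
    static Properties build(String bootstrap) {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", bootstrap); // ephemeral Testcontainers port in this run
        p.setProperty("security.protocol", "SASL_PLAINTEXT");
        p.setProperty("sasl.mechanism", "PLAIN");      // sasl.jaas.config is logged as [hidden]
        p.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.setProperty("enable.idempotence", "true");   // matches acks = -1, retries = 2147483647 in the dump
        return p;
    }

    public static void main(String[] args) {
        System.out.println(build("localhost:43219").getProperty("security.protocol"));
    }
}
```

In the real client these properties would be handed to `new KafkaProducer<>(p)`, which then prints exactly the kind of "ProducerConfig values:" dump seen in this log.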
09:10:47 09:10:47.479 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1
09:10:47 09:10:47.479 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5
09:10:47 09:10:47.479 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973847476
09:10:47 09:10:47.483 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client initialized successfully
09:10:47 09:10:47.483 [main] INFO org.onap.test.core.service.ClientInitializer - ========================================
09:10:47 09:10:47.483 [main] INFO org.onap.test.core.service.ClientInitializer - ========================================
09:10:47 09:10:47.505 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values:
09:10:47 allow.auto.create.topics = false
09:10:47 auto.commit.interval.ms = 5000
09:10:47 auto.offset.reset = latest
09:10:47 bootstrap.servers = [localhost:43219]
09:10:47 check.crcs = true
09:10:47 client.dns.lookup = use_all_dns_ips
09:10:47 client.id = dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89
09:10:47 client.rack =
09:10:47 connections.max.idle.ms = 540000
09:10:47 default.api.timeout.ms = 60000
09:10:47 enable.auto.commit = true
09:10:47 exclude.internal.topics = true
09:10:47 fetch.max.bytes = 52428800
09:10:47 fetch.max.wait.ms = 500
09:10:47 fetch.min.bytes = 1
09:10:47 group.id = noapp
09:10:47 group.instance.id = null
09:10:47 heartbeat.interval.ms = 3000
09:10:47 interceptor.classes = []
09:10:47 internal.leave.group.on.close = true
09:10:47 internal.throw.on.fetch.stable.offset.unsupported = false
09:10:47 isolation.level = read_uncommitted
09:10:47 key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
09:10:47 max.partition.fetch.bytes = 1048576
09:10:47 max.poll.interval.ms = 300000
09:10:47 max.poll.records = 500
09:10:47 metadata.max.age.ms = 300000
09:10:47 metric.reporters = []
09:10:47 metrics.num.samples = 2
09:10:47 metrics.recording.level = INFO
09:10:47 metrics.sample.window.ms = 30000
09:10:47 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
09:10:47 receive.buffer.bytes = 65536
09:10:47 reconnect.backoff.max.ms = 1000
09:10:47 reconnect.backoff.ms = 50
09:10:47 request.timeout.ms = 30000
09:10:47 retry.backoff.ms = 100
09:10:47 sasl.client.callback.handler.class = null
09:10:47 sasl.jaas.config = [hidden]
09:10:47 sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:10:47 sasl.kerberos.min.time.before.relogin = 60000
09:10:47 sasl.kerberos.service.name = null
09:10:47 sasl.kerberos.ticket.renew.jitter = 0.05
09:10:47 sasl.kerberos.ticket.renew.window.factor = 0.8
09:10:47 sasl.login.callback.handler.class = null
09:10:47 sasl.login.class = null
09:10:47 sasl.login.connect.timeout.ms = null
09:10:47 sasl.login.read.timeout.ms = null
09:10:47 sasl.login.refresh.buffer.seconds = 300
09:10:47 sasl.login.refresh.min.period.seconds = 60
09:10:47 sasl.login.refresh.window.factor = 0.8
09:10:47 sasl.login.refresh.window.jitter = 0.05
09:10:47 sasl.login.retry.backoff.max.ms = 10000
09:10:47 sasl.login.retry.backoff.ms = 100
09:10:47 sasl.mechanism = PLAIN
09:10:47 sasl.oauthbearer.clock.skew.seconds = 30
09:10:47 sasl.oauthbearer.expected.audience = null
09:10:47 sasl.oauthbearer.expected.issuer = null
09:10:47 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:10:47 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:10:47 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:10:47 sasl.oauthbearer.jwks.endpoint.url = null
09:10:47 sasl.oauthbearer.scope.claim.name = scope
09:10:47 sasl.oauthbearer.sub.claim.name = sub
09:10:47 sasl.oauthbearer.token.endpoint.url = null
09:10:47 security.protocol = SASL_PLAINTEXT
09:10:47 security.providers = null
09:10:47 send.buffer.bytes = 131072
09:10:47 session.timeout.ms = 45000
09:10:47 socket.connection.setup.timeout.max.ms = 30000
09:10:47 socket.connection.setup.timeout.ms = 10000
09:10:47 ssl.cipher.suites = null
09:10:47 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:10:47 ssl.endpoint.identification.algorithm = https
09:10:47 ssl.engine.factory.class = null
09:10:47 ssl.key.password = null
09:10:47 ssl.keymanager.algorithm = SunX509
09:10:47 ssl.keystore.certificate.chain = null
09:10:47 ssl.keystore.key = null
09:10:47 ssl.keystore.location = null
09:10:47 ssl.keystore.password = null
09:10:47 ssl.keystore.type = JKS
09:10:47 ssl.protocol = TLSv1.3
09:10:47 ssl.provider = null
09:10:47 ssl.secure.random.implementation = null
09:10:47 ssl.trustmanager.algorithm = PKIX
09:10:47 ssl.truststore.certificates = null
09:10:47 ssl.truststore.location = null
09:10:47 ssl.truststore.password = null
09:10:47 ssl.truststore.type = JKS
09:10:47 value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
09:10:47
09:10:47 09:10:47.566 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1
09:10:47 09:10:47.566 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5
09:10:47 09:10:47.566 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973847566
09:10:47 09:10:47.567 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Subscribed to topic(s): SDC-DIST-NOTIF-TOPIC
09:10:47 09:10:47.570 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client started successfully
09:10:47 09:10:47.570 [main] INFO org.onap.test.core.service.ClientInitializer - ========================================
09:10:48 09:10:48.083 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.Metadata - [Producer
clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Cluster ID: uEgK2oSzR8mdxSswZOanjw
09:10:48 09:10:48.085 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] ProducerId set to 0 with epoch 0
09:10:48 09:10:48.088 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 2 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION}
09:10:48 09:10:48.089 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Cluster ID: uEgK2oSzR8mdxSswZOanjw
09:10:48 09:10:48.214 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 4 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION}
09:10:48 09:10:48.326 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 6 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION}
09:10:48 09:10:48.335 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Discovered group coordinator localhost:43219 (id: 2147483646 rack: null)
09:10:48 09:10:48.357 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] (Re-)joining group
09:10:48 09:10:48.403 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Request joining group due to: need to re-join with the given member-id: dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6
09:10:48 09:10:48.404 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
09:10:48 09:10:48.405 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] (Re-)joining group
09:10:48 09:10:48.431 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 11 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION}
09:10:48 09:10:48.441 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Successfully joined group with generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6', protocol='range'}
09:10:48 09:10:48.537 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 12 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION}
09:10:48 09:10:48.543 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Finished assignment for group at generation 1: {dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6=Assignment(partitions=[])}
09:10:48 09:10:48.573 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values:
09:10:48 acks = -1
09:10:48 batch.size = 16384
09:10:48 bootstrap.servers = [PLAINTEXT://localhost:43219]
09:10:48 buffer.memory = 33554432
09:10:48 client.dns.lookup = use_all_dns_ips
09:10:48 client.id = producer-1
09:10:48 compression.type = none
09:10:48 connections.max.idle.ms = 540000
09:10:48 delivery.timeout.ms = 120000
09:10:48 enable.idempotence = true
09:10:48 interceptor.classes = []
09:10:48 key.serializer = class org.apache.kafka.common.serialization.StringSerializer
09:10:48 linger.ms = 0
09:10:48 max.block.ms = 60000
09:10:48 max.in.flight.requests.per.connection = 5
09:10:48 max.request.size = 1048576
09:10:48 metadata.max.age.ms = 300000
09:10:48 metadata.max.idle.ms = 300000
09:10:48 metric.reporters = []
09:10:48 metrics.num.samples = 2
09:10:48 metrics.recording.level = INFO
09:10:48 metrics.sample.window.ms = 30000
09:10:48 partitioner.adaptive.partitioning.enable = true
09:10:48 partitioner.availability.timeout.ms = 0
09:10:48 partitioner.class = null
09:10:48 partitioner.ignore.keys = false
09:10:48 receive.buffer.bytes = 32768
09:10:48 reconnect.backoff.max.ms = 1000
09:10:48 reconnect.backoff.ms = 50
09:10:48 request.timeout.ms = 30000
09:10:48 retries = 2147483647
09:10:48 retry.backoff.ms = 100
09:10:48 sasl.client.callback.handler.class = null
09:10:48 sasl.jaas.config = [hidden]
09:10:48 sasl.kerberos.kinit.cmd = /usr/bin/kinit
09:10:48 sasl.kerberos.min.time.before.relogin = 60000
09:10:48 sasl.kerberos.service.name = null
09:10:48 sasl.kerberos.ticket.renew.jitter = 0.05
09:10:48 sasl.kerberos.ticket.renew.window.factor = 0.8
09:10:48 sasl.login.callback.handler.class = null
09:10:48 sasl.login.class = null
09:10:48 sasl.login.connect.timeout.ms = null
09:10:48 sasl.login.read.timeout.ms = null
09:10:48 sasl.login.refresh.buffer.seconds = 300
09:10:48 sasl.login.refresh.min.period.seconds = 60
09:10:48 sasl.login.refresh.window.factor = 0.8
09:10:48 sasl.login.refresh.window.jitter = 0.05
09:10:48 sasl.login.retry.backoff.max.ms = 10000
09:10:48 sasl.login.retry.backoff.ms = 100
09:10:48 sasl.mechanism = PLAIN
09:10:48 sasl.oauthbearer.clock.skew.seconds = 30
09:10:48 sasl.oauthbearer.expected.audience = null
09:10:48 sasl.oauthbearer.expected.issuer = null
09:10:48 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
09:10:48 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
09:10:48 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
09:10:48 sasl.oauthbearer.jwks.endpoint.url = null
09:10:48 sasl.oauthbearer.scope.claim.name = scope
09:10:48 sasl.oauthbearer.sub.claim.name = sub
09:10:48 sasl.oauthbearer.token.endpoint.url = null
09:10:48 security.protocol = SASL_PLAINTEXT
09:10:48 security.providers = null
09:10:48 send.buffer.bytes = 131072
09:10:48 socket.connection.setup.timeout.max.ms = 30000
09:10:48 socket.connection.setup.timeout.ms = 10000
09:10:48 ssl.cipher.suites = null
09:10:48 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
09:10:48 ssl.endpoint.identification.algorithm = https
09:10:48 ssl.engine.factory.class = null
09:10:48 ssl.key.password = null
09:10:48 ssl.keymanager.algorithm = SunX509
09:10:48 ssl.keystore.certificate.chain = null
09:10:48 ssl.keystore.key = null
09:10:48 ssl.keystore.location = null
09:10:48 ssl.keystore.password = null
09:10:48 ssl.keystore.type = JKS
09:10:48
ssl.protocol = TLSv1.3 09:10:48 ssl.provider = null 09:10:48 ssl.secure.random.implementation = null 09:10:48 ssl.trustmanager.algorithm = PKIX 09:10:48 ssl.truststore.certificates = null 09:10:48 ssl.truststore.location = null 09:10:48 ssl.truststore.password = null 09:10:48 ssl.truststore.type = JKS 09:10:48 transaction.timeout.ms = 60000 09:10:48 transactional.id = null 09:10:48 value.serializer = class org.apache.kafka.common.serialization.StringSerializer 09:10:48 09:10:48 09:10:48.576 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer. 09:10:48 09:10:48.586 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 09:10:48 09:10:48.587 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 09:10:48 09:10:48.587 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1770973848586 09:10:48 09:10:48.610 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Successfully synced group in generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6', protocol='range'} 09:10:48 09:10:48.612 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Notifying assignor about the new Assignment(partitions=[]) 09:10:48 09:10:48.612 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Adding newly assigned partitions: 09:10:48 09:10:48.624 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - 
[Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {SDC-DIST-NOTIF-TOPIC=LEADER_NOT_AVAILABLE} 09:10:48 09:10:48.625 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: uEgK2oSzR8mdxSswZOanjw 09:10:48 09:10:48.627 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 09:10:48 09:10:48.642 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Error while fetching metadata with correlation id 14 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 09:10:48 09:10:48.741 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to IkDP28uPQ26Quz-Dv6VJUQ 09:10:48 09:10:48.754 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to IkDP28uPQ26Quz-Dv6VJUQ 09:10:48 09:10:48.755 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Request joining group due to: cached metadata has changed from (version6: {}) at the beginning of the rebalance to (version8: {SDC-DIST-NOTIF-TOPIC=1}) 09:10:48 09:10:48.756 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Revoke 
previously assigned partitions 09:10:48 09:10:48.756 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] (Re-)joining group 09:10:48 09:10:48.770 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Successfully joined group with generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6', protocol='range'} 09:10:48 09:10:48.770 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Finished assignment for group at generation 2: {dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6=Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0])} 09:10:48 09:10:48.776 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Successfully synced group in generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89-4b4fa7de-dbf9-446f-a7ab-09e53fc3ece6', protocol='range'} 09:10:48 09:10:48.776 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Notifying assignor about the new Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0]) 09:10:48 09:10:48.779 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, 
groupId=noapp] Adding newly assigned partitions: SDC-DIST-NOTIF-TOPIC-0 09:10:48 09:10:48.794 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Found no committed offset for partition SDC-DIST-NOTIF-TOPIC-0 09:10:48 09:10:48.835 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Resetting offset for partition SDC-DIST-NOTIF-TOPIC-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43219 (id: 1 rack: null)], epoch=0}}. 09:10:48 09:10:48.853 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 09:10:48 09:10:48.862 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 09:10:48 09:10:48.862 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 09:10:48 09:10:48.862 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 09:10:48 09:10:48.863 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered 09:10:48 09:10:48.865 [main] INFO org.onap.test.core.service.ClientInitializerTest - Waiting for artifacts 09:10:48 09:10:48.920 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:48 09:10:48.921 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:48 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:48 "consumerID": "dcae-openapi-manager", 09:10:48 "timestamp": 1770973847571, 09:10:48 "artifactURL": 
"/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/k8s-tca-clamp-policy-05082019.yaml", 09:10:48 "status": "NOT_NOTIFIED" 09:10:48 } 09:10:48 to topic SDC-DIST-STATUS-TOPIC 09:10:48 09:10:48.977 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Error while fetching metadata with correlation id 4 : {SDC-DIST-STATUS-TOPIC=LEADER_NOT_AVAILABLE} 09:10:49 09:10:49.082 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Error while fetching metadata with correlation id 5 : {SDC-DIST-STATUS-TOPIC=LEADER_NOT_AVAILABLE} 09:10:49 09:10:49.186 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Resetting the last seen epoch of partition SDC-DIST-STATUS-TOPIC-0 to 0 since the associated topicId changed from null to LVkMh0W5RnW5kyKzBSQZHQ 09:10:50 09:10:50.191 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:50 09:10:50.192 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:50 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:50 "consumerID": "dcae-openapi-manager", 09:10:50 "timestamp": 1770973847571, 09:10:50 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vf-license-model.xml", 09:10:50 "status": "NOT_NOTIFIED" 09:10:50 } 09:10:50 to topic SDC-DIST-STATUS-TOPIC 09:10:51 09:10:51.194 [pool-1-thread-1] INFO 
org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:51 09:10:51.194 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:51 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:51 "consumerID": "dcae-openapi-manager", 09:10:51 "timestamp": 1770973847571, 09:10:51 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/base_template.env", 09:10:51 "status": "NOT_NOTIFIED" 09:10:51 } 09:10:51 to topic SDC-DIST-STATUS-TOPIC 09:10:52 09:10:52.197 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:52 09:10:52.197 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:52 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:52 "consumerID": "dcae-openapi-manager", 09:10:52 "timestamp": 1770973847571, 09:10:52 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb_cds68b6da5968e40_modules.json", 09:10:52 "status": "NOT_NOTIFIED" 09:10:52 } 09:10:52 to topic SDC-DIST-STATUS-TOPIC 09:10:53 09:10:53.201 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:53 09:10:53.202 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:53 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:53 "consumerID": "dcae-openapi-manager", 09:10:53 "timestamp": 1770973847571, 09:10:53 "artifactURL": "/", 09:10:53 "status": "NOTIFIED" 09:10:53 } 09:10:53 to topic SDC-DIST-STATUS-TOPIC 09:10:54 09:10:54.204 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:54 09:10:54.204 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:54 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:54 "consumerID": "dcae-openapi-manager", 
09:10:54 "timestamp": 1770973847571, 09:10:54 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vdns.env", 09:10:54 "status": "NOT_NOTIFIED" 09:10:54 } 09:10:54 to topic SDC-DIST-STATUS-TOPIC 09:10:55 09:10:55.207 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:55 09:10:55.207 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:55 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:55 "consumerID": "dcae-openapi-manager", 09:10:55 "timestamp": 1770973847571, 09:10:55 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vendor-license-model.xml", 09:10:55 "status": "NOT_NOTIFIED" 09:10:55 } 09:10:55 to topic SDC-DIST-STATUS-TOPIC 09:10:56 09:10:56.209 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:56 09:10:56.209 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:56 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:56 "consumerID": "dcae-openapi-manager", 09:10:56 "timestamp": 1770973847571, 09:10:56 "artifactURL": "/", 09:10:56 "status": "NOTIFIED" 09:10:56 } 09:10:56 to topic SDC-DIST-STATUS-TOPIC 09:10:57 09:10:57.211 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:57 09:10:57.211 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:57 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:57 "consumerID": "dcae-openapi-manager", 09:10:57 "timestamp": 1770973847571, 09:10:57 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb.env", 09:10:57 "status": "NOT_NOTIFIED" 09:10:57 } 09:10:57 to topic SDC-DIST-STATUS-TOPIC 09:10:58 09:10:58.213 [pool-1-thread-1] INFO 
org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:58 09:10:58.213 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:58 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:58 "consumerID": "dcae-openapi-manager", 09:10:58 "timestamp": 1770973847571, 09:10:58 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vpkg.env", 09:10:58 "status": "NOT_NOTIFIED" 09:10:58 } 09:10:58 to topic SDC-DIST-STATUS-TOPIC 09:10:59 09:10:59.215 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:10:59 09:10:59.215 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:10:59 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:10:59 "consumerID": "dcae-openapi-manager", 09:10:59 "timestamp": 1770973847571, 09:10:59 "artifactURL": "/", 09:10:59 "status": "NOTIFIED" 09:10:59 } 09:10:59 to topic SDC-DIST-STATUS-TOPIC 09:11:00 09:11:00.217 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:11:00 09:11:00.218 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:11:00 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:11:00 "consumerID": "dcae-openapi-manager", 09:11:00 "timestamp": 1770973847571, 09:11:00 "artifactURL": "/", 09:11:00 "status": "NOTIFIED" 09:11:00 } 09:11:00 to topic SDC-DIST-STATUS-TOPIC 09:11:01 09:11:01.220 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:11:01 09:11:01.220 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:11:01 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:11:01 "consumerID": "dcae-openapi-manager", 09:11:01 "timestamp": 1770973847571, 09:11:01 "artifactURL": 
"/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-template.yml", 09:11:01 "status": "NOT_NOTIFIED" 09:11:01 } 09:11:01 to topic SDC-DIST-STATUS-TOPIC 09:11:02 09:11:02.222 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 09:11:02 09:11:02.223 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { 09:11:02 "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", 09:11:02 "consumerID": "dcae-openapi-manager", 09:11:02 "timestamp": 1770973847571, 09:11:02 "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-csar.csar", 09:11:02 "status": "NOT_NOTIFIED" 09:11:02 } 09:11:02 to topic SDC-DIST-STATUS-TOPIC 09:11:03 09:11:03.227 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 09:11:03 09:11:03.227 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Distrubuted service information 09:11:03 09:11:03.227 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service UUID: d2192fd5-6ba4-40d2-9078-e3642d9175ee 09:11:03 09:11:03.228 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service name: demoVLB_CDS 09:11:03 09:11:03.228 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service resources: 09:11:03 09:11:03.229 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Resource: vLB_CDS 68b6da59-68e4 09:11:03 09:11:03.229 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Artifacts: 09:11:03 09:11:03.229 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vpkg.yaml 09:11:03 09:11:03.229 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vlb.yaml 09:11:03 09:11:03.229 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vdns.yaml 09:11:03 09:11:03.230 
[pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: base_template.yaml 09:11:03 09:11:03.230 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 09:11:03 09:11:03.230 [pool-1-thread-1] INFO org.onap.test.core.service.ArtifactsDownloader - Downloading artifacts... 09:11:03 09:11:03.237 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 09:11:03 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 09:11:03 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 09:11:03 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 09:11:03 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 09:11:03 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 09:11:03 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 09:11:03 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 09:11:03 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 09:11:03 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 09:11:03 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 09:11:03 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 09:11:03 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at 
java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 09:11:03 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 09:11:03 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 09:11:03 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 09:11:03 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 09:11:03 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 09:11:03 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 09:11:03 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 09:11:03 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 09:11:03 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 09:11:03 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 09:11:03 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 09:11:03 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 09:11:03 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 09:11:03 at java.base/java.lang.Thread.run(Thread.java:829) 09:11:03 Caused by: java.net.ConnectException: Connection refused (Connection refused) 09:11:03 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 
09:11:03 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 09:11:03 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 09:11:03 at java.base/java.net.Socket.connect(Socket.java:609) 09:11:03 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 09:11:03 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 09:11:03 ... 30 common frames omitted 09:11:03 09:11:03.238 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@6a648552 09:11:03 09:11:03.242 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 09:11:03 09:11:03.243 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 09:11:03 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 09:11:03 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 09:11:03 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 09:11:03 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 09:11:03 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 09:11:03 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 09:11:03 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 09:11:03 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 09:11:03 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 09:11:03 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 09:11:03 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 09:11:03 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 09:11:03 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 09:11:03 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 09:11:03 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 09:11:03 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 09:11:03 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 09:11:03 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 09:11:03 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 09:11:03 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 09:11:03 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 09:11:03 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 09:11:03 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 09:11:03 at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 09:11:03 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 09:11:03 at java.base/java.lang.Thread.run(Thread.java:829) 09:11:03 Caused by: java.net.ConnectException: Connection refused (Connection refused) 09:11:03 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) 09:11:03 at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 09:11:03 at java.base/java.net.Socket.connect(Socket.java:609) 09:11:03 at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) 09:11:03 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) 09:11:03 ... 
30 common frames omitted 09:11:03 09:11:03.244 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@e23a6e 09:11:03 09:11:03.244 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 09:11:03 09:11:03.244 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / 09:11:03 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) 09:11:03 at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) 09:11:03 at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) 09:11:03 at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) 09:11:03 at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) 09:11:03 at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) 09:11:03 at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) 09:11:03 at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) 09:11:03 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) 09:11:03 at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) 09:11:03 at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) 09:11:03 at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) 09:11:03 at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at 
java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) 09:11:03 at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) 09:11:03 at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) 09:11:03 at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) 09:11:03 at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 09:11:03 at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) 09:11:03 at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 09:11:03 at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) 09:11:03 at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) 09:11:03 at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) 09:11:03 at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62) 09:11:03 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) 09:11:03 at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 09:11:03 at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) 09:11:03 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 09:11:03 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 09:11:03 at java.base/java.lang.Thread.run(Thread.java:829) 09:11:03 Caused by: java.net.ConnectException: Connection refused (Connection refused) 09:11:03 at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) 09:11:03 at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) 
09:11:03 	at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
09:11:03 	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
09:11:03 	at java.base/java.net.Socket.connect(Socket.java:609)
09:11:03 	at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
09:11:03 	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
09:11:03 	... 30 common frames omitted
09:11:03 09:11:03.245 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@21524e74
09:11:03 09:11:03.245 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC]
09:11:03 09:11:03.246 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /
09:11:03 org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
09:11:03 	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156)
09:11:03 	at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
09:11:03 	at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
09:11:03 	at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
09:11:03 	at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
09:11:03 	at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
09:11:03 	at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
09:11:03 	at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
09:11:03 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
09:11:03 	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108)
09:11:03 	at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116)
09:11:03 	at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136)
09:11:03 	at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195)
09:11:03 	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
09:11:03 	at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
09:11:03 	at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274)
09:11:03 	at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
09:11:03 	at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
09:11:03 	at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
09:11:03 	at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913)
09:11:03 	at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
09:11:03 	at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578)
09:11:03 	at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56)
09:11:03 	at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65)
09:11:03 	at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:62)
09:11:03 	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
09:11:03 	at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
09:11:03 	at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
09:11:03 	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
09:11:03 	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
09:11:03 	at java.base/java.lang.Thread.run(Thread.java:829)
09:11:03 Caused by: java.net.ConnectException: Connection refused (Connection refused)
09:11:03 	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
09:11:03 	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
09:11:03 	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
09:11:03 	at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
09:11:03 	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
09:11:03 	at java.base/java.net.Socket.connect(Socket.java:609)
09:11:03 	at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75)
09:11:03 	at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
09:11:03 	... 30 common frames omitted
09:11:03 09:11:03.246 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@2cf7f00a
09:11:03 09:11:03.246 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC]
09:11:03 09:11:03.260 [main] INFO org.onap.test.core.service.ClientInitializer - ========================================
09:11:03 09:11:03.261 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client stopped successfully
09:11:03 09:11:03.261 [main] INFO org.onap.test.core.service.ClientInitializer - ========================================
09:11:03 09:11:03.659 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Node 1 disconnected.
09:11:03 09:11:03.666 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Node -1 disconnected.
09:11:03 09:11:03.754 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Node 1 disconnected.
09:11:03 09:11:03.754 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Node -1 disconnected.
09:11:03 09:11:03.755 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Node 2147483646 disconnected.
09:11:03 09:11:03.755 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Group coordinator localhost:43219 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted.
09:11:03 09:11:03.768 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Node 1 disconnected.
09:11:03 09:11:03.768 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
09:11:03 09:11:03.856 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Node 1 disconnected.
09:11:03 09:11:03.857 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
09:11:03 09:11:03.870 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Node 1 disconnected.
09:11:03 09:11:03.870 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
09:11:04 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.151 s - in org.onap.test.core.service.ClientInitializerTest
09:11:04 09:11:04.059 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Node 1 disconnected.
09:11:04 09:11:04.060 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-0dbce27b-c5df-4a6e-a5b5-2b3da3b96a89, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
09:11:04 09:11:04.072 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Node 1 disconnected.
09:11:04 09:11:04.072 [kafka-producer-network-thread | dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-86baf609-ac5e-4824-aac4-aae3cf0903e3] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available.
09:11:04 [INFO]
09:11:04 [INFO] Results:
09:11:04 [INFO]
09:11:04 [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
09:11:04 [INFO]
09:11:04 [INFO]
09:11:04 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci ---
09:11:04 [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec
09:11:04 [INFO] Analyzed bundle 'sdc-distribution-ci' with 9 classes
09:11:04 [INFO]
09:11:04 [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci ---
09:11:04 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar
09:11:04 [INFO]
09:11:04 [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci ---
09:11:04 [INFO] No previous run data found, generating javadoc.
09:11:06 [INFO]
09:11:06 Loading source files for package org.onap.test.core.service...
09:11:06 Loading source files for package org.onap.test.core.config...
09:11:06 Loading source files for package org.onap.test.it...
09:11:06 Constructing Javadoc information...
09:11:06 Standard Doclet version 11.0.16
09:11:06 Building tree for all the packages and classes...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/constant-values.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html...
09:11:06 Building index for all the packages and classes...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-tree.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index-all.html...
09:11:06 Building index for all classes...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses-index.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allpackages-index.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/deprecated-list.html...
09:11:06 Building index for all classes...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-summary.html...
09:11:06 Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/help-doc.html...
09:11:06 [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar
09:11:06 [INFO]
09:11:06 [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci ---
09:11:06 [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/**
09:11:06 [INFO]
09:11:06 [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci ---
09:11:06 [INFO]
09:11:06 [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci ---
09:11:06 [INFO] Skipping JaCoCo execution due to missing execution data file.
09:11:06 [INFO]
09:11:06 [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci ---
09:11:06 [INFO]
09:11:06 [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci ---
09:11:06 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT.jar
09:11:06 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT.pom
09:11:06 [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.2.0-SNAPSHOT/sdc-distribution-ci-2.2.0-SNAPSHOT-javadoc.jar
09:11:06 [INFO] ------------------------------------------------------------------------
09:11:06 [INFO] Reactor Summary for sdc-sdc-distribution-client 2.2.0-SNAPSHOT:
09:11:06 [INFO]
09:11:06 [INFO] sdc-sdc-distribution-client ........................ SUCCESS [ 15.363 s]
09:11:06 [INFO] sdc-distribution-client-api ........................ SUCCESS [ 5.321 s]
09:11:06 [INFO] sdc-distribution-client ............................ SUCCESS [ 53.131 s]
09:11:06 [INFO] sdc-distribution-ci ................................ SUCCESS [ 56.727 s]
09:11:06 [INFO] ------------------------------------------------------------------------
09:11:06 [INFO] BUILD SUCCESS
09:11:06 [INFO] ------------------------------------------------------------------------
09:11:06 [INFO] Total time: 02:11 min
09:11:06 [INFO] Finished at: 2026-02-13T09:11:06Z
09:11:06 [INFO] ------------------------------------------------------------------------
09:11:06 $ ssh-agent -k
09:11:06 unset SSH_AUTH_SOCK;
09:11:06 unset SSH_AGENT_PID;
09:11:06 echo Agent pid 2057 killed;
09:11:06 [ssh-agent] Stopped.
09:11:06 [PostBuildScript] - [INFO] Executing post build scripts.
09:11:06 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins11116293067627283739.sh
09:11:06 ---> sysstat.sh
09:11:07 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins6731702564467569411.sh
09:11:07 ---> package-listing.sh
09:11:07 ++ facter osfamily
09:11:07 ++ tr '[:upper:]' '[:lower:]'
09:11:07 + OS_FAMILY=debian
09:11:07 + workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise
09:11:07 + START_PACKAGES=/tmp/packages_start.txt
09:11:07 + END_PACKAGES=/tmp/packages_end.txt
09:11:07 + DIFF_PACKAGES=/tmp/packages_diff.txt
09:11:07 + PACKAGES=/tmp/packages_start.txt
09:11:07 + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']'
09:11:07 + PACKAGES=/tmp/packages_end.txt
09:11:07 + case "${OS_FAMILY}" in
09:11:07 + dpkg -l
09:11:07 + grep '^ii'
09:11:07 + '[' -f /tmp/packages_start.txt ']'
09:11:07 + '[' -f /tmp/packages_end.txt ']'
09:11:07 + diff /tmp/packages_start.txt /tmp/packages_end.txt
09:11:07 + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']'
09:11:07 + mkdir -p /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/
09:11:07 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/
09:11:07 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins1466468935741804574.sh
09:11:07 ---> capture-instance-metadata.sh
09:11:07 Setup pyenv:
09:11:07   system
09:11:07   3.8.13
09:11:07   3.9.13
09:11:07 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
09:11:07 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qtqx from file:/tmp/.os_lf_venv
09:11:07 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
09:11:07 lf-activate-venv(): INFO: Attempting to install with network-safe options...
09:11:09 lf-activate-venv(): INFO: Base packages installed successfully
09:11:09 lf-activate-venv(): INFO: Installing additional packages: lftools
09:11:17 lf-activate-venv(): INFO: Adding /tmp/venv-Qtqx/bin to PATH
09:11:17 INFO: Running in OpenStack, capturing instance metadata
09:11:18 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins9761599638771960575.sh
09:11:18 provisioning config files...
09:11:18 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config13592943559334537964tmp
09:11:18 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[]
09:11:18 Run condition [Regular expression match] preventing perform for step [Provide Configuration files]
09:11:18 [EnvInject] - Injecting environment variables from a build step.
09:11:18 [EnvInject] - Injecting as environment variables the properties content
09:11:18 SERVER_ID=logs
09:11:18 
09:11:18 [EnvInject] - Variables injected successfully.
09:11:18 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins16761338855166170083.sh
09:11:18 ---> create-netrc.sh
09:11:18 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins766706990956931461.sh
09:11:18 ---> python-tools-install.sh
09:11:18 Setup pyenv:
09:11:18   system
09:11:18   3.8.13
09:11:18   3.9.13
09:11:18 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
09:11:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qtqx from file:/tmp/.os_lf_venv
09:11:18 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
09:11:18 lf-activate-venv(): INFO: Attempting to install with network-safe options...
09:11:20 lf-activate-venv(): INFO: Base packages installed successfully
09:11:20 lf-activate-venv(): INFO: Installing additional packages: lftools
09:11:28 lf-activate-venv(): INFO: Adding /tmp/venv-Qtqx/bin to PATH
09:11:28 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins14252157707667802898.sh
09:11:28 ---> sudo-logs.sh
09:11:28 Archiving 'sudo' log..
09:11:28 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins12868756915135235511.sh
09:11:28 ---> job-cost.sh
09:11:28 INFO: Activating Python virtual environment...
09:11:28 Setup pyenv:
09:11:28   system
09:11:28   3.8.13
09:11:28   3.9.13
09:11:28 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
09:11:28 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qtqx from file:/tmp/.os_lf_venv
09:11:28 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
09:11:28 lf-activate-venv(): INFO: Attempting to install with network-safe options...
09:11:30 lf-activate-venv(): INFO: Base packages installed successfully
09:11:30 lf-activate-venv(): INFO: Installing additional packages: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
09:11:35 lf-activate-venv(): INFO: Adding /tmp/venv-Qtqx/bin to PATH
09:11:35 INFO: No stack-cost file found
09:11:35 INFO: Instance uptime: 295s
09:11:35 INFO: Fetching instance metadata (attempt 1 of 3)...
09:11:35 DEBUG: URL: http://169.254.169.254/latest/meta-data/instance-type
09:11:35 INFO: Successfully fetched instance metadata
09:11:35 INFO: Instance type: v3-standard-8
09:11:35 INFO: Retrieving pricing info for: v3-standard-8
09:11:35 INFO: Fetching Vexxhost pricing API (attempt 1 of 3)...
09:11:35 DEBUG: URL: https://pricing.vexxhost.net/v1/pricing/v3-standard-8/cost?seconds=295
09:11:35 INFO: Successfully fetched Vexxhost pricing API
09:11:35 INFO: Retrieved cost: 0.22
09:11:35 INFO: Retrieved resource: v3-standard-8
09:11:35 INFO: Creating archive directory: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/cost
09:11:35 INFO: Archiving costs to: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/cost.csv
09:11:35 INFO: Successfully archived job cost data
09:11:35 DEBUG: Cost data: sdc-sdc-distribution-client-master-integration-pairwise,1250,2026-02-13 09:11:35,v3-standard-8,295,0.22,0.00,SUCCESS
09:11:35 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash -l /tmp/jenkins8215333493167805017.sh
09:11:35 ---> logs-deploy.sh
09:11:35 Setup pyenv:
09:11:35   system
09:11:35   3.8.13
09:11:35   3.9.13
09:11:35 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version)
09:11:35 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-Qtqx from file:/tmp/.os_lf_venv
09:11:35 lf-activate-venv(): INFO: Installing base packages (pip, setuptools, virtualenv)
09:11:35 lf-activate-venv(): INFO: Attempting to install with network-safe options...
09:11:37 lf-activate-venv(): INFO: Base packages installed successfully
09:11:37 lf-activate-venv(): INFO: Installing additional packages: lftools urllib3~=1.26.15
09:11:45 lf-activate-venv(): INFO: Adding /tmp/venv-Qtqx/bin to PATH
09:11:45 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-master-integration-pairwise/1250
09:11:45 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
09:11:46 Archives upload complete.
09:11:46 INFO: archiving logs to Nexus
09:11:47 ---> uname -a:
09:11:47 Linux prd-ubuntu1804-docker-8c-8g-8464 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
09:11:47 
09:11:47 
09:11:47 ---> lscpu:
09:11:47 Architecture:        x86_64
09:11:47 CPU op-mode(s):      32-bit, 64-bit
09:11:47 Byte Order:          Little Endian
09:11:47 CPU(s):              8
09:11:47 On-line CPU(s) list: 0-7
09:11:47 Thread(s) per core:  1
09:11:47 Core(s) per socket:  1
09:11:47 Socket(s):           8
09:11:47 NUMA node(s):        1
09:11:47 Vendor ID:           AuthenticAMD
09:11:47 CPU family:          23
09:11:47 Model:               49
09:11:47 Model name:          AMD EPYC-Rome Processor
09:11:47 Stepping:            0
09:11:47 CPU MHz:             2800.000
09:11:47 BogoMIPS:            5600.00
09:11:47 Virtualization:      AMD-V
09:11:47 Hypervisor vendor:   KVM
09:11:47 Virtualization type: full
09:11:47 L1d cache:           32K
09:11:47 L1i cache:           32K
09:11:47 L2 cache:            512K
09:11:47 L3 cache:            16384K
09:11:47 NUMA node0 CPU(s):   0-7
09:11:47 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
09:11:47 
09:11:47 
09:11:47 ---> nproc:
09:11:47 8
09:11:47 
09:11:47 
09:11:47 ---> df -h:
09:11:47 Filesystem      Size  Used Avail Use% Mounted on
09:11:47 udev             16G     0   16G   0% /dev
09:11:47 tmpfs           3.2G  712K  3.2G   1% /run
09:11:47 /dev/vda1       155G   11G  145G   8% /
09:11:47 tmpfs            16G     0   16G   0% /dev/shm
09:11:47 tmpfs           5.0M     0  5.0M   0% /run/lock
09:11:47 tmpfs            16G     0   16G   0% /sys/fs/cgroup
09:11:47 /dev/vda15      105M  4.4M  100M   5% /boot/efi
09:11:47 tmpfs           3.2G     0  3.2G   0% /run/user/1001
09:11:47 
09:11:47 
09:11:47 ---> free -m:
09:11:47               total        used        free      shared  buff/cache   available
09:11:47 Mem:          32167         866       28174           0        3127       30850
09:11:47 Swap:          1023           0        1023
09:11:47 
09:11:47 
09:11:47 ---> ip addr:
09:11:47 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
09:11:47     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
09:11:47     inet 127.0.0.1/8 scope host lo
09:11:47        valid_lft forever preferred_lft forever
09:11:47     inet6 ::1/128 scope host
09:11:47        valid_lft forever preferred_lft forever
09:11:47 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
09:11:47     link/ether fa:16:3e:4c:3d:2c brd ff:ff:ff:ff:ff:ff
09:11:47     inet 10.30.107.143/23 brd 10.30.107.255 scope global dynamic ens3
09:11:47        valid_lft 86101sec preferred_lft 86101sec
09:11:47     inet6 fe80::f816:3eff:fe4c:3d2c/64 scope link
09:11:47        valid_lft forever preferred_lft forever
09:11:47 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
09:11:47     link/ether 02:42:24:74:3d:8a brd ff:ff:ff:ff:ff:ff
09:11:47     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
09:11:47        valid_lft forever preferred_lft forever
09:11:47     inet6 fe80::42:24ff:fe74:3d8a/64 scope link
09:11:47        valid_lft forever preferred_lft forever
09:11:47 
09:11:47 
09:11:47 ---> sar -b -r -n DEV:
09:11:47 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8464) 	02/13/26 	_x86_64_	(8 CPU)
09:11:47 
09:11:47 09:06:50     LINUX RESTART	(8 CPU)
09:11:47 
09:11:47 09:07:02          tps      rtps      wtps   bread/s   bwrtn/s
09:11:47 09:08:01       170.11     74.92     95.19   5368.20  83488.30
09:11:47 09:09:01       139.04     13.98    125.06    281.82  65674.79
09:11:47 09:10:01       121.18      8.75    112.43    652.69  52877.32
09:11:47 09:11:01        87.49      3.90     83.59    501.12  29681.45
09:11:47 Average:       129.29     25.19    104.10   1686.23  57827.84
09:11:47 
09:11:47 09:07:02    kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
09:11:47 09:08:01     30438860  31768896   2500360      7.59     45600   1608588   1385836      4.08    762336   1473672     36276
09:11:47 09:09:01     29882440  31561056   3056780      9.28     76968   1905072   2027412      5.97    996096   1747632    125592
09:11:47 09:10:01     28208932  30039088   4730288     14.36     84512   2044016   3268796      9.62   2557652   1851848      1504
09:11:47 09:11:01     26994352  29719556   5944868     18.05    104208   2891076   6139832     18.06   2998084   2546220       772
09:11:47 Average:     28881146  30772149   4058074     12.32     77822   2112188   3205469      9.43   1828542   1904843     41036
09:11:47 
09:11:47 09:07:02        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
09:11:47 09:08:01           lo      1.35      1.35      0.14      0.14      0.00      0.00      0.00      0.00
09:11:47 09:08:01         ens3    327.18    226.20   1123.59     59.48      0.00      0.00      0.00      0.00
09:11:47 09:08:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:11:47 09:09:01           lo      1.20      1.20      0.14      0.14      0.00      0.00      0.00      0.00
09:11:47 09:09:01         ens3     95.53     67.11   1169.98     16.41      0.00      0.00      0.00      0.00
09:11:47 09:09:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:11:47 09:10:01           lo     18.98     18.98      2.42      2.42      0.00      0.00      0.00      0.00
09:11:47 09:10:01         ens3   1144.39    836.54   2070.29    270.61      0.00      0.00      0.00      0.00
09:11:47 09:10:01      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
09:11:47 09:11:01           lo      8.73      8.73      1.36      1.36      0.00      0.00      0.00      0.00
09:11:47 09:11:01  veth66141d4      0.15      0.35      0.02      0.04      0.00      0.00      0.00      0.00
09:11:47 09:11:01         ens3    463.71    294.60   6957.53     58.36      0.00      0.00      0.00      0.00
09:11:47 09:11:01      docker0      2.00      2.88      0.40      0.55      0.00      0.00      0.00      0.00
09:11:47 Average:           lo      7.59      7.59      1.02      1.02      0.00      0.00      0.00      0.00
09:11:47 Average:  veth66141d4      0.04      0.09      0.00      0.01      0.00      0.00      0.00      0.00
09:11:47 Average: ens3 508.43 356.63 2837.20 101.38 0.00 0.00 0.00 0.00 09:11:47 Average: docker0 0.50 0.72 0.10 0.14 0.00 0.00 0.00 0.00 09:11:47 09:11:47 09:11:47 ---> sar -P ALL: 09:11:47 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-8464) 02/13/26 _x86_64_ (8 CPU) 09:11:47 09:11:47 09:06:50 LINUX RESTART (8 CPU) 09:11:47 09:11:47 09:07:02 CPU %user %nice %system %iowait %steal %idle 09:11:47 09:08:01 all 5.79 0.00 1.03 9.77 0.04 83.37 09:11:47 09:08:01 0 3.38 0.00 1.07 2.43 0.02 93.11 09:11:47 09:08:01 1 2.61 0.00 0.86 3.64 0.03 92.85 09:11:47 09:08:01 2 8.85 0.00 0.71 1.44 0.03 88.96 09:11:47 09:08:01 3 8.33 0.00 1.34 6.41 0.05 83.86 09:11:47 09:08:01 4 2.58 0.00 0.58 17.64 0.05 79.15 09:11:47 09:08:01 5 2.88 0.00 1.87 38.56 0.03 56.65 09:11:47 09:08:01 6 10.98 0.00 1.04 1.87 0.05 86.07 09:11:47 09:08:01 7 6.70 0.00 0.78 6.26 0.03 86.22 09:11:47 09:09:01 all 9.39 0.00 0.63 6.33 0.03 83.61 09:11:47 09:09:01 0 1.47 0.00 0.23 0.27 0.02 98.01 09:11:47 09:09:01 1 6.36 0.00 0.23 0.15 0.02 93.24 09:11:47 09:09:01 2 2.03 0.00 0.05 0.00 0.02 97.90 09:11:47 09:09:01 3 25.93 0.00 1.51 6.16 0.07 66.34 09:11:47 09:09:01 4 6.47 0.00 0.67 4.53 0.03 88.30 09:11:47 09:09:01 5 20.17 0.00 1.24 15.15 0.05 63.39 09:11:47 09:09:01 6 10.19 0.00 0.65 0.90 0.02 88.24 09:11:47 09:09:01 7 2.57 0.00 0.52 23.56 0.05 73.30 09:11:47 09:10:01 all 17.85 0.00 1.57 3.00 0.06 77.51 09:11:47 09:10:01 0 14.20 0.00 2.24 0.92 0.08 82.55 09:11:47 09:10:01 1 20.79 0.00 1.07 0.20 0.07 77.87 09:11:47 09:10:01 2 21.36 0.00 1.69 0.05 0.07 76.83 09:11:47 09:10:01 3 21.42 0.00 1.49 5.88 0.07 71.14 09:11:47 09:10:01 4 10.37 0.00 1.17 0.60 0.05 87.81 09:11:47 09:10:01 5 19.58 0.00 1.45 0.07 0.07 78.83 09:11:47 09:10:01 6 20.25 0.00 1.46 8.67 0.07 69.56 09:11:47 09:10:01 7 14.86 0.00 1.93 7.59 0.07 75.56 09:11:47 09:11:01 all 14.40 0.00 2.38 3.32 0.06 79.84 09:11:47 09:11:01 0 14.99 0.00 3.05 12.43 0.07 69.46 09:11:47 09:11:01 1 16.99 0.00 2.43 0.50 0.08 79.98 09:11:47 09:11:01 2 12.19 0.00 2.23 0.52 
0.07 84.99 09:11:47 09:11:01 3 13.97 0.00 2.81 5.55 0.05 77.63 09:11:47 09:11:01 4 16.06 0.00 2.55 2.52 0.07 78.81 09:11:47 09:11:01 5 15.64 0.00 2.15 3.62 0.07 78.52 09:11:47 09:11:01 6 13.20 0.00 2.23 0.57 0.05 83.95 09:11:47 09:11:01 7 12.09 0.00 1.69 0.84 0.05 85.33 09:11:47 Average: all 11.88 0.00 1.41 5.59 0.05 81.08 09:11:47 Average: 0 8.52 0.00 1.65 4.01 0.05 85.77 09:11:47 Average: 1 11.71 0.00 1.15 1.12 0.05 85.98 09:11:47 Average: 2 11.10 0.00 1.17 0.50 0.05 87.18 09:11:47 Average: 3 17.45 0.00 1.79 6.00 0.06 74.70 09:11:47 Average: 4 8.89 0.00 1.24 6.28 0.05 83.54 09:11:47 Average: 5 14.62 0.00 1.68 14.24 0.05 69.40 09:11:47 Average: 6 13.66 0.00 1.34 3.00 0.05 81.95 09:11:47 Average: 7 9.06 0.00 1.23 9.58 0.05 80.08 09:11:47 09:11:47 09:11:47