Triggered by Gerrit: https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/141775 Running as SYSTEM [EnvInject] - Loading node environment variables. Building remotely on prd-ubuntu1804-docker-8c-8g-37411 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-12Aa2F2FLmmC/agent.2238 SSH_AGENT_PID=2240 [ssh-agent] Started. Running ssh-add (command line suppressed) Identity added: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_13941460755700624332.key (/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/private_key_13941460755700624332.key) [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) The recommended git tool is: NONE using credential onap-jenkins-ssh Wiping out workspace first. Cloning the remote Git repository Cloning repository git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git > git init /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git > git --version # timeout=10 > git --version # 'git version 2.17.1' using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git +refs/heads/*:refs/remotes/origin/* # timeout=30 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 > git config remote.origin.url git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git # timeout=10 Fetching upstream changes from git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git using GIT_SSH to set credentials Gerrit user Verifying host key using manually-configured host key entries > git fetch --tags --progress -- git://cloud.onap.org/mirror/sdc/sdc-distribution-client.git refs/changes/75/141775/2 # timeout=30 > git rev-parse 30cdcc1934dceee49d95346da5a57543a16b6c99^{commit} # timeout=10 JENKINS-19022: warning: possible memory leak due to Git plugin usage; see: https://plugins.jenkins.io/git/#remove-git-plugin-buildsbybranch-builddata-script Checking out Revision 30cdcc1934dceee49d95346da5a57543a16b6c99 (refs/changes/75/141775/2) > git config core.sparsecheckout # timeout=10 > git checkout -f 30cdcc1934dceee49d95346da5a57543a16b6c99 # timeout=30 Commit message: "Chore: Add dependabot config" > git rev-parse FETCH_HEAD^{commit} # timeout=10 > git rev-list --no-walk d1d24e354436c253d2342cde452fb99856e1bae4 # timeout=10 [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins14198182039986363242.sh ---> python-tools-install.sh Setup pyenv: * system (set by /opt/pyenv/version) * 3.8.13 (set by /opt/pyenv/version) * 3.9.13 (set by /opt/pyenv/version) * 3.10.6 (set by /opt/pyenv/version) lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-RsCQ lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RsCQ/bin to PATH Generating Requirements File Python 3.10.6 pip 25.2 from /tmp/venv-RsCQ/lib/python3.10/site-packages/pip (python 3.10) appdirs==1.4.4 argcomplete==3.6.2 aspy.yaml==1.3.0 attrs==25.3.0 
autopage==0.5.2 beautifulsoup4==4.13.4 boto3==1.40.8 botocore==1.40.8 bs4==0.0.2 cachetools==5.5.2 certifi==2025.8.3 cffi==1.17.1 cfgv==3.4.0 chardet==5.2.0 charset-normalizer==3.4.3 click==8.2.1 cliff==4.10.0 cmd2==2.7.0 cryptography==3.3.2 debtcollector==3.0.0 decorator==5.2.1 defusedxml==0.7.1 Deprecated==1.2.18 distlib==0.4.0 dnspython==2.7.0 docker==7.1.0 dogpile.cache==1.4.0 durationpy==0.10 email_validator==2.2.0 filelock==3.18.0 future==1.0.0 gitdb==4.0.12 GitPython==3.1.45 google-auth==2.40.3 httplib2==0.22.0 identify==2.6.13 idna==3.10 importlib-resources==1.5.0 iso8601==2.1.0 Jinja2==3.1.6 jmespath==1.0.1 jsonpatch==1.33 jsonpointer==3.0.0 jsonschema==4.25.0 jsonschema-specifications==2025.4.1 keystoneauth1==5.11.1 kubernetes==33.1.0 lftools==0.37.13 lxml==6.0.0 markdown-it-py==4.0.0 MarkupSafe==3.0.2 mdurl==0.1.2 msgpack==1.1.1 multi_key_dict==2.0.3 munch==4.0.0 netaddr==1.3.0 niet==1.4.2 nodeenv==1.9.1 oauth2client==4.1.3 oauthlib==3.3.1 openstacksdk==4.6.0 os-client-config==2.3.0 os-service-types==1.8.0 osc-lib==4.1.0 oslo.config==10.0.0 oslo.context==6.0.0 oslo.i18n==6.5.1 oslo.log==7.2.0 oslo.serialization==5.7.0 oslo.utils==9.0.0 packaging==25.0 pbr==7.0.0 platformdirs==4.3.8 prettytable==3.16.0 psutil==7.0.0 pyasn1==0.6.1 pyasn1_modules==0.4.2 pycparser==2.22 pygerrit2==2.0.15 PyGithub==2.7.0 Pygments==2.19.2 PyJWT==2.10.1 PyNaCl==1.5.0 pyparsing==2.4.7 pyperclip==1.9.0 pyrsistent==0.20.0 python-cinderclient==9.7.0 python-dateutil==2.9.0.post0 python-heatclient==4.3.0 python-jenkins==1.8.3 python-keystoneclient==5.6.0 python-magnumclient==4.8.1 python-openstackclient==8.1.0 python-swiftclient==4.8.0 PyYAML==6.0.2 referencing==0.36.2 requests==2.32.4 requests-oauthlib==2.0.0 requestsexceptions==1.4.0 rfc3986==2.0.0 rich==14.1.0 rich-argparse==1.7.1 rpds-py==0.27.0 rsa==4.9.1 ruamel.yaml==0.18.14 ruamel.yaml.clib==0.2.12 s3transfer==0.13.1 simplejson==3.20.1 six==1.17.0 smmap==5.0.2 soupsieve==2.7 stevedore==5.4.1 tabulate==0.9.0 toml==0.10.2 tomlkit==0.13.3 tqdm==4.67.1 typing_extensions==4.14.1 tzdata==2025.2 urllib3==1.26.20 virtualenv==20.34.0 wcwidth==0.2.13 websocket-client==1.8.0 wrapt==1.17.3 xdg==6.0.0 xmltodict==0.14.2 yq==3.4.3 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SET_JDK_VERSION=openjdk11 GIT_URL="git://cloud.onap.org/mirror" [EnvInject] - Variables injected successfully. [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/sh /tmp/jenkins2165171776991558262.sh ---> update-java-alternatives.sh ---> Updating Java version ---> Ubuntu/Debian system detected update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode openjdk version "11.0.16" 2022-07-19 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu118.04) OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu118.04, mixed mode) JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' [EnvInject] - Variables injected successfully. provisioning config files... 
copy managed file [global-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config5284510697017295497tmp copy managed file [sdc-sdc-distribution-client-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config14898516578198721684tmp [EnvInject] - Injecting environment variables from a build step. Unpacking https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip to /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36 on prd-ubuntu1804-docker-8c-8g-37411 using settings config with name sdc-sdc-distribution-client-settings Replacing all maven server entries not found in credentials list is true using global settings config with name global-settings Replacing all maven server entries not found in credentials list is true [sdc-sdc-distribution-client-master-integration-pairwise] $ /w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -s /tmp/settings2202248450190939817.xml -gs /tmp/global-settings12248119734590257080.xml -DGERRIT_BRANCH=master -DGERRIT_PATCHSET_REVISION=30cdcc1934dceee49d95346da5a57543a16b6c99 -DGERRIT_HOST=gerrit.onap.org -DMVN=/w/tools/hudson.tasks.Maven_MavenInstallation/mvn36/bin/mvn -DGERRIT_CHANGE_OWNER_EMAIL=ksandi@contractor.linuxfoundation.org "-DGERRIT_EVENT_ACCOUNT_NAME=Kevin Sandi" -DGERRIT_CHANGE_URL=https://gerrit.onap.org/r/c/sdc/sdc-distribution-client/+/141775 -DGERRIT_PATCHSET_UPLOADER_EMAIL=ksandi@contractor.linuxfoundation.org "-DARCHIVE_ARTIFACTS= **/target/surefire-reports/*-output.txt" -DGERRIT_EVENT_TYPE=patchset-created -DSTACK_NAME=$JOB_NAME-$BUILD_NUMBER -DGERRIT_PROJECT=sdc/sdc-distribution-client -DGERRIT_CHANGE_NUMBER=141775 -DGERRIT_SCHEME=ssh '-DGERRIT_PATCHSET_UPLOADER=\"Kevin Sandi\" ' -DGERRIT_PORT=29418 -DGERRIT_CHANGE_PRIVATE_STATE=false -DGERRIT_REFSPEC=refs/changes/75/141775/2 "-DGERRIT_PATCHSET_UPLOADER_NAME=Kevin Sandi" '-DGERRIT_CHANGE_OWNER=\"Kevin Sandi\" ' -DPROJECT=sdc/sdc-distribution-client -DGERRIT_HASHTAGS= -DGERRIT_CHANGE_COMMIT_MESSAGE=Q2hvcmU6IEFkZCBkZXBlbmRhYm90IGNvbmZpZwoKSXNzdWUtSUQ6IENJTUFOLTMzCkNoYW5nZS1JZDogSTk2NmJhMjM1OGNiMTNkY2IxODY3NDYwNGFiODI2ODA0OWQ1MDA5OTAKU2lnbmVkLW9mZi1ieTogS2V2aW4gU2FuZGkgPGtzYW5kaUBjb250cmFjdG9yLmxpbnV4Zm91bmRhdGlvbi5vcmc+Cg== -DGERRIT_NAME=Primary -DGERRIT_TOPIC= "-DGERRIT_CHANGE_SUBJECT=Chore: Add dependabot config" '-DGERRIT_EVENT_ACCOUNT=\"Kevin Sandi\" ' -DGERRIT_CHANGE_WIP_STATE=false -DGERRIT_CHANGE_ID=I966ba2358cb13dcb18674604ab8268049d500990 -DGERRIT_EVENT_HASH=-1614213431 -DGERRIT_VERSION=3.7.2 -DGERRIT_EVENT_ACCOUNT_EMAIL=ksandi@contractor.linuxfoundation.org -DGERRIT_PATCHSET_NUMBER=2 "-DMAVEN_PARAMS= -P integration-pairwise" "-DGERRIT_CHANGE_OWNER_NAME=Kevin Sandi" -DMAVEN_OPTS='' clean install -B -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn -P integration-pairwise [INFO] Scanning for projects... 
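The -DGERRIT_CHANGE_COMMIT_MESSAGE value in the Maven command line above is the change's commit message, base64-encoded. As a minimal sketch (the class name is illustrative and not part of this job), it can be decoded with the JDK's Base64 API; the result matches the GERRIT_CHANGE_SUBJECT also passed on the command line:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: decode the base64 commit message passed as
// -DGERRIT_CHANGE_COMMIT_MESSAGE in the mvn invocation above.
public class DecodeCommitMessage {
    public static void main(String[] args) {
        String encoded = "Q2hvcmU6IEFkZCBkZXBlbmRhYm90IGNvbmZpZwoKSXNzdWUtSUQ6IENJTUFOLTMzCkNoYW5nZS1JZDogSTk2NmJhMjM1OGNiMTNkY2IxODY3NDYwNGFiODI2ODA0OWQ1MDA5OTAKU2lnbmVkLW9mZi1ieTogS2V2aW4gU2FuZGkgPGtzYW5kaUBjb250cmFjdG9yLmxpbnV4Zm91bmRhdGlvbi5vcmc+Cg==";
        String decoded = new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
        System.out.println(decoded);
        // Prints:
        // Chore: Add dependabot config
        //
        // Issue-ID: CIMAN-33
        // Change-Id: I966ba2358cb13dcb18674604ab8268049d500990
        // Signed-off-by: Kevin Sandi <ksandi@contractor.linuxfoundation.org>
    }
}
```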
[INFO] ------------------------------------------------------------------------ [INFO] Reactor Build Order: [INFO] [INFO] sdc-sdc-distribution-client [pom] [INFO] sdc-distribution-client [jar] [INFO] sdc-distribution-ci [jar] [INFO] [INFO] --< org.onap.sdc.sdc-distribution-client:sdc-main-distribution-client >-- [INFO] Building sdc-sdc-distribution-client 2.1.2-SNAPSHOT [1/3] [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-main-distribution-client --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-main-distribution-client --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-main-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-main-distribution-client --- [INFO] Not executing Javadoc as the project is not a Java classpath-capable package [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-main-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-main-distribution-client --- [INFO] No tests to run. [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-main-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. 
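The jacoco-maven-plugin prepare-agent executions above only set the surefireArgLine/argLine/failsafeArgLine properties that point the JVM agent at the .exec destfiles; the report goals are skipped for this parent pom because no tests ran, so no execution data file was written. A minimal sketch of inspecting such an .exec file offline, assuming org.jacoco:org.jacoco.core is on the classpath (the path and class name below are illustrative, not part of the build):

```java
import java.io.File;
import java.io.IOException;
import org.jacoco.core.data.ExecutionData;
import org.jacoco.core.tools.ExecFileLoader;

// Illustrative sketch: list the classes recorded in a JaCoCo execution-data
// file such as target/code-coverage/jacoco-ut.exec produced by the
// prepare-agent goals above.
public class DumpJacocoExec {
    public static void main(String[] args) throws IOException {
        ExecFileLoader loader = new ExecFileLoader();
        loader.load(new File("target/code-coverage/jacoco-ut.exec"));
        for (ExecutionData data : loader.getExecutionDataStore().getContents()) {
            // One entry per instrumented class touched during the run.
            System.out.println(data.getName());
        }
    }
}
```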
[INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-main-distribution-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-main-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-main-distribution-client/2.1.2-SNAPSHOT/sdc-main-distribution-client-2.1.2-SNAPSHOT.pom [INFO] [INFO] ----< org.onap.sdc.sdc-distribution-client:sdc-distribution-client >---- [INFO] Building sdc-distribution-client 2.1.2-SNAPSHOT [2/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-client --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-client --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-client --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-client --- [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-client --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 61 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/main/java/org/onap/sdc/impl/DistributionClientImpl.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-client --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 10 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-client --- [INFO] Changes detected - recompiling the module! 
[INFO] Compiling 24 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Some input files use or override a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/http/SdcConnectorClientTest.java: Recompile with -Xlint:deprecation for details. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java uses unchecked or unsafe operations. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/src/test/java/org/onap/sdc/utils/NotificationSenderTest.java: Recompile with -Xlint:unchecked for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-client --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.46 s - in org.onap.sdc.http.HttpSdcClientResponseTest [INFO] Running org.onap.sdc.http.HttpSdcClientTest 19:06:33.871 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 19:06:34.691 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8443http://127.0.0.1:8080/target 19:06:34.693 [main] DEBUG org.onap.sdc.http.HttpSdcClient - GET Response Status 200 [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.764 s - in org.onap.sdc.http.HttpSdcClientTest [INFO] Running org.onap.sdc.http.HttpClientFactoryTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.451 s - in org.onap.sdc.http.HttpClientFactoryTest [INFO] Running org.onap.sdc.http.HttpRequestFactoryTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 s - in org.onap.sdc.http.HttpRequestFactoryTest [INFO] Running org.onap.sdc.http.SdcConnectorClientTest 19:06:35.627 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 56f6a90e-03c4-49f2-8c62-c1617fbbca98 url= /sdc/v1/artifactTypes 19:06:35.629 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 1083840077 19:06:35.635 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 19:06:35.637 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: ["Service","Resource","VF","VFC"] 19:06:35.638 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to close http response 19:06:35.654 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 44acd012-f51c-43e7-a555-f8d77ebec6b8 url= /sdc/v1/artifactTypes 19:06:35.658 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to parse response from SDC. error: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. 
at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.parseGetValidArtifactTypesResponse(SdcConnectorClient.java:155) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:79) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.http.SdcConnectorClientTest.getValidArtifactTypesListParsingExceptionHandlingTest(SdcConnectorClientTest.java:216) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at 
java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:06:35.765 [main] ERROR org.onap.sdc.http.SdcConnectorClient - failed to get artifact from response 19:06:35.771 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 490b5bf1-fd4f-4ec9-ae23-f40c8e0b4c18 url= /sdc/v1/artifactTypes 19:06:35.772 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 849575186 19:06:35.772 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 19:06:35.773 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 19:06:35.777 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= fb36ea39-7c3e-4cb4-99ed-5df03fd37ceb url= /sdc/v1/distributionKafkaData 19:06:35.778 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 719016748 19:06:35.778 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_TIMEOUT, responseMessage=SDC server problem] 19:06:35.779 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: It just didn't work 19:06:35.788 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is Mock for HttpSdcResponse, hashCode: 348574135 19:06:35.788 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_SERVER_PROBLEM, responseMessage=SDC server problem] 19:06:35.789 [main] ERROR org.onap.sdc.http.SdcConnectorClient - During error handling another exception occurred: java.io.IOException: Not implemented. This is expected as the implementation is for unit tests only. at org.onap.sdc.http.SdcConnectorClientTest$ThrowingInputStreamForTesting.read(SdcConnectorClientTest.java:312) at java.base/java.io.InputStream.read(InputStream.java:271) at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) at java.base/java.io.InputStreamReader.read(InputStreamReader.java:181) at java.base/java.io.Reader.read(Reader.java:229) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282) at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1261) at org.apache.commons.io.IOUtils.copy(IOUtils.java:1108) at org.apache.commons.io.IOUtils.copy(IOUtils.java:922) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2681) at org.apache.commons.io.IOUtils.toString(IOUtils.java:2661) at org.onap.sdc.http.SdcConnectorClient.handleSdcDownloadArtifactError(SdcConnectorClient.java:256) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:144) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at 
org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:130) at org.onap.sdc.http.SdcConnectorClientTest.downloadArtifactHandleDownloadErrorTest(SdcConnectorClientTest.java:304) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:06:35.828 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= c7780f2c-7acc-4ad0-af19-ac3a1d67e2ae url= /sdc/v1/artifactTypes 19:06:35.840 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= 059579c3-232e-4801-b92e-96f76455a4cf url= /sdc/v1/distributionKafkaData [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.651 s - in org.onap.sdc.http.SdcConnectorClientTest [INFO] Running org.onap.sdc.utils.SdcKafkaTest 19:06:35.938 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Starting Zookeeper test server 19:06:36.123 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - clientPortAddress is 0.0.0.0:42969 19:06:36.124 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - secureClientPort is not set 19:06:36.124 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - observerMasterPort is not set 19:06:36.124 [Thread-2] INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig - metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider 19:06:36.126 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServerMain - Starting server 19:06:36.142 [Thread-2] INFO org.apache.zookeeper.server.ServerMetrics - ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@6ad67691 19:06:36.147 [Thread-2] DEBUG org.apache.zookeeper.server.persistence.FileTxnSnapLog - Opening datadir:/tmp/kafka-unit11219859268625780946 snapDir:/tmp/kafka-unit11219859268625780946 19:06:36.148 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - zookeeper.snapshot.trust.empty : false 19:06:36.162 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - ______ _ 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |___ / | | 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / ___ ___ | | __ ___ ___ _ __ ___ _ __ 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - | | 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - |_| 19:06:36.163 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:host.name=prd-ubuntu1804-docker-8c-8g-37411 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.version=11.0.16 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.vendor=Ubuntu 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 
19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-
5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-refl
ect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 19:06:36.167 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.io.tmpdir=/tmp 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:java.compiler= 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.name=Linux 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.arch=amd64 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.version=4.15.0-192-generic 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.name=jenkins 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.home=/home/jenkins 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.free=441MB 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.max=8042MB 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Server environment:os.memory.total=504MB 19:06:36.168 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.enableEagerACLCheck = false 19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.digest.enabled = true 19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.closeSessionTxn.enabled = true 
19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.flushDelay=0 19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxWriteQueuePollTime=0 19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.maxBatchSize=1000 19:06:36.169 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - zookeeper.intBufferStartingSizeBytes = 1024 19:06:36.171 [Thread-2] INFO org.apache.zookeeper.server.BlueThrottle - Weighed connection throttling is disabled 19:06:36.173 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - minSessionTimeout set to 6000 19:06:36.173 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - maxSessionTimeout set to 60000 19:06:36.174 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 19:06:36.174 [Thread-2] INFO org.apache.zookeeper.server.ResponseCache - Response cache size is initialized with value 400. 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotCapacity = 60 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.slotDuration = 15 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.maxDepth = 6 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.initialDelay = 5 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.delay = 5 19:06:36.176 [Thread-2] INFO org.apache.zookeeper.server.util.RequestPathMetricsCollector - zookeeper.pathStats.enabled = false 19:06:36.179 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The max bytes for all large requests are set to 104857600 19:06:36.179 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - The large request threshold is set to -1 19:06:36.179 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 clientPortListenBacklog -1 datadir /tmp/kafka-unit11219859268625780946/version-2 snapdir /tmp/kafka-unit11219859268625780946/version-2 19:06:36.198 [Thread-2] INFO org.apache.zookeeper.server.ServerCnxnFactory - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory 19:06:36.214 [Thread-2] INFO org.apache.zookeeper.common.X509Util - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation 19:06:36.236 [Thread-2] INFO org.apache.zookeeper.Login - Server successfully logged in. 19:06:36.240 [Thread-2] WARN org.apache.zookeeper.server.ServerCnxnFactory - maxCnxns is not configured, using default value 0. 19:06:36.243 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
19:06:36.252 [Thread-2] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - binding to port 0.0.0.0/0.0.0.0:42969 19:06:36.283 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 19:06:36.283 [Thread-2] INFO org.apache.zookeeper.server.watch.WatchManagerFactory - Using org.apache.zookeeper.server.watch.WatchManager as watch manager 19:06:36.284 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.snapshotSizeFactor = 0.33 19:06:36.284 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - zookeeper.commitLogCount=500 19:06:36.299 [Thread-2] INFO org.apache.zookeeper.server.persistence.SnapStream - zookeeper.snapshot.compression.method = CHECKED 19:06:36.299 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit11219859268625780946/version-2/snapshot.0 19:06:36.310 [Thread-2] INFO org.apache.zookeeper.server.ZKDatabase - Snapshot loaded in 26 ms, highest zxid is 0x0, digest is 1371985504 19:06:36.311 [Thread-2] INFO org.apache.zookeeper.server.persistence.FileTxnSnapLog - Snapshotting: 0x0 to /tmp/kafka-unit11219859268625780946/version-2/snapshot.0 19:06:36.312 [Thread-2] INFO org.apache.zookeeper.server.ZooKeeperServer - Snapshot taken in 1 ms 19:06:36.332 [Thread-2] INFO org.apache.zookeeper.server.RequestThrottler - zookeeper.request_throttler.shutdownTimeout = 10000 19:06:36.332 [ProcessThread(sid:0 cport:42969):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor (sid:0) started, reconfigEnabled=false 19:06:36.356 [Thread-2] INFO org.apache.zookeeper.server.ContainerManager - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 19:06:36.359 [Thread-2] INFO org.apache.zookeeper.audit.ZKAuditProvider - ZooKeeper audit is disabled. 
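At this point the harness has an in-process ZooKeeper bound to an ephemeral port (42969 on this run) with its data under a throwaway /tmp/kafka-unit* directory, and the next entries show the embedded Kafka broker being configured against it. The kafka-junit5 artifacts on the test classpath (com.salesforce.kafka.test:kafka-junit5 and kafka-junit-core 3.2.4) suggest the cluster is managed by that library. The following is only a minimal sketch of how such a single-broker test cluster is commonly declared in a JUnit 5 test; the class and topic names are illustrative and the SASL wiring seen later in this log is not reproduced here:

    import org.junit.jupiter.api.Test;
    import org.junit.jupiter.api.extension.RegisterExtension;
    import com.salesforce.kafka.test.KafkaTestUtils;
    import com.salesforce.kafka.test.junit5.SharedKafkaTestResource;

    class EmbeddedKafkaSketchTest {

        // Starts an in-process ZooKeeper plus a single Kafka broker on random free
        // ports for the lifetime of the test class, similar to the servers booting
        // in this log.
        @RegisterExtension
        static final SharedKafkaTestResource KAFKA = new SharedKafkaTestResource()
                .withBrokers(1);

        @Test
        void brokerIsReachable() {
            KafkaTestUtils utils = KAFKA.getKafkaTestUtils();
            utils.createTopic("sketch-topic", 1, (short) 1); // illustrative topic
            System.out.println("bootstrap: " + KAFKA.getKafkaConnectString());
        }
    }

The connect string and data directories are regenerated on every run, which is why the port and /tmp paths in this log differ between builds.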
19:06:38.003 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:38537 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:38537 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit12180474530667575823 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = 
null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null 
sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:42969 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 19:06:38.082 [main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean 19:06:38.227 [main] DEBUG org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 19:06:38.233 [main] INFO kafka.server.KafkaServer - starting 19:06:38.233 [main] INFO kafka.server.KafkaServer - Connecting to zookeeper on 127.0.0.1:42969 19:06:38.234 [main] DEBUG 
org.apache.kafka.common.security.JaasUtils - Checking login config for Zookeeper JAAS context [java.security.auth.login.config=src/test/resources/jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client] 19:06:38.256 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Initializing a new session to 127.0.0.1:42969. 19:06:38.263 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.6.3--6401e4ad2087061bc6b9f80dec2d69f2e3c8660a, built on 04/08/2021 16:35 GMT 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:host.name=prd-ubuntu1804-docker-8c-8g-37411 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.version=11.0.16 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Ubuntu 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/lib/jvm/java-11-openjdk-amd64 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/test-classes:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/classes:/home/jenkins/.m2/repository/org/apache/kafka/kafka-clients/3.3.1/kafka-clients-3.3.1.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.5.2-1/zstd-jni-1.5.2-1.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.8.0/lz4-java-1.8.0.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.4/snappy-java-1.1.8.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.15.2/jackson-core-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.15.2/jackson-databind-2.15.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.15.2/jackson-annotations-2.15.2.jar:/home/jenkins/.m2/repository/org/projectlombok/lombok/1.18.24/lombok-1.18.24.jar:/home/jenkins/.m2/repository/org/json/json/20220320/json-20220320.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.8.9/gson-2.8.9.jar:/home/jenkins/.m2/repository/org/functionaljava/functionaljava/4.8/functionaljava-4.8.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.2/commons-logging-1.2.jar:/home/jenkins/.m2/repository/org/yaml/snakeyaml/1.30/snakeyaml-1.30.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.15/httpcore-4.4.15.jar:/home/jenkins/.m2/repository/com/google/guava/guava/32.1.2-jre/guava-32.1.2-jre.jar:/home/jenkins/.m2/repository/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar:/home/jenkins/.m2/repository/com/google/guava/listenablefuture/9999.0-empty-to-avoid-conflict-with-guava/listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.2/jsr305-3.0.2.jar:/home/jenkins/.m2/repository/org/checkerframework/checker-qual/3.33.0/checker-qual-3.33.0.jar:/home/jenkins/.m2/repository/com/google/errorprone/error_prone_annotations/2.18.0/error_prone_annotations-2.18.0.jar:/home/jenkins/.m2/repository/com/google/j2objc/j2objc-annotations/2.8/j2objc-annotations-2.8.jar:/home/jenkins/.m2/repository/org/eclipse/jett
y/jetty-servlet/9.4.48.v20220622/jetty-servlet-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.48.v20220622/jetty-util-ajax-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.48.v20220622/jetty-webapp-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.48.v20220622/jetty-xml-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.48.v20220622/jetty-util-9.4.48.v20220622.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter/5.7.2/junit-jupiter-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-api/5.7.2/junit-jupiter-api-5.7.2.jar:/home/jenkins/.m2/repository/org/apiguardian/apiguardian-api/1.1.0/apiguardian-api-1.1.0.jar:/home/jenkins/.m2/repository/org/opentest4j/opentest4j/1.2.0/opentest4j-1.2.0.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-params/5.7.2/junit-jupiter-params-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/jupiter/junit-jupiter-engine/5.7.2/junit-jupiter-engine-5.7.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-engine/1.7.2/junit-platform-engine-1.7.2.jar:/home/jenkins/.m2/repository/org/mockito/mockito-junit-jupiter/3.12.4/mockito-junit-jupiter-3.12.4.jar:/home/jenkins/.m2/repository/org/mockito/mockito-inline/3.12.4/mockito-inline-3.12.4.jar:/home/jenkins/.m2/repository/org/junit-pioneer/junit-pioneer/1.4.2/junit-pioneer-1.4.2.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-commons/1.7.1/junit-platform-commons-1.7.1.jar:/home/jenkins/.m2/repository/org/junit/platform/junit-platform-launcher/1.7.1/junit-platform-launcher-1.7.1.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.12.4/mockito-core-3.12.4.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.11.13/byte-buddy-1.11.13.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.11.13/byte-buddy-agent-1.11.13.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/3.2/objenesis-3.2.jar:/home/jenkins/.m2/repository/com/google/code/bean-matchers/bean-matchers/0.12/bean-matchers-0.12.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest/2.2/hamcrest-2.2.jar:/home/jenkins/.m2/repository/org/assertj/assertj-core/3.18.1/assertj-core-3.18.1.jar:/home/jenkins/.m2/repository/io/github/hakky54/logcaptor/2.7.10/logcaptor-2.7.10.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-classic/1.2.3/logback-classic-1.2.3.jar:/home/jenkins/.m2/repository/ch/qos/logback/logback-core/1.2.3/logback-core-1.2.3.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-to-slf4j/2.17.2/log4j-to-slf4j-2.17.2.jar:/home/jenkins/.m2/repository/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.36/jul-to-slf4j-1.7.36.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit5/3.2.4/kafka-junit5-3.2.4.jar:/home/jenkins/.m2/repository/com/salesforce/kafka/test/kafka-junit-core/3.2.4/kafka-junit-core-3.2.4.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-test/2.12.0/curator-test-2.12.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.18.1-GA/javassist-3.18.1-GA.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka_2.13/3.3.1/kafka_2.13-3.3.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.13.8/scala-library-2.13.8.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-server-common/3.3.1/kafka-server-common-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-metadata/3
.3.1/kafka-metadata-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-raft/3.3.1/kafka-raft-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage/3.3.1/kafka-storage-3.3.1.jar:/home/jenkins/.m2/repository/org/apache/kafka/kafka-storage-api/3.3.1/kafka-storage-api-3.3.1.jar:/home/jenkins/.m2/repository/net/sourceforge/argparse4j/argparse4j/0.7.0/argparse4j-0.7.0.jar:/home/jenkins/.m2/repository/net/sf/jopt-simple/jopt-simple/5.0.4/jopt-simple-5.0.4.jar:/home/jenkins/.m2/repository/org/bitbucket/b_c/jose4j/0.7.9/jose4j-0.7.9.jar:/home/jenkins/.m2/repository/com/yammer/metrics/metrics-core/2.2.0/metrics-core-2.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.13/2.6.0/scala-collection-compat_2.13-2.6.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-java8-compat_2.13/1.0.2/scala-java8-compat_2.13-1.0.2.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.13.8/scala-reflect-2.13.8.jar:/home/jenkins/.m2/repository/com/typesafe/scala-logging/scala-logging_2.13/3.9.4/scala-logging_2.13-3.9.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.12.1/metrics-core-4.1.12.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.3/zookeeper-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.3/zookeeper-jute-3.6.3.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/io/netty/netty-handler/4.1.63.Final/netty-handler-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-common/4.1.63.Final/netty-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-resolver/4.1.63.Final/netty-resolver-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-buffer/4.1.63.Final/netty-buffer-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport/4.1.63.Final/netty-transport-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-codec/4.1.63.Final/netty-codec-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-epoll/4.1.63.Final/netty-transport-native-epoll-4.1.63.Final.jar:/home/jenkins/.m2/repository/io/netty/netty-transport-native-unix-common/4.1.63.Final/netty-transport-native-unix-common-4.1.63.Final.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.4/commons-cli-1.4.jar:/home/jenkins/.m2/repository/org/skyscreamer/jsonassert/1.5.3/jsonassert-1.5.3.jar:/home/jenkins/.m2/repository/com/vaadin/external/google/android-json/0.0.20131108.vaadin1/android-json-0.0.20131108.vaadin1.jar: 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:java.compiler= 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.version=4.15.0-192-generic 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.name=jenkins 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/jenkins 19:06:38.264 [main] INFO 
org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.free=577MB 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.max=8042MB 19:06:38.264 [main] INFO org.apache.zookeeper.ZooKeeper - Client environment:os.memory.total=674MB 19:06:38.268 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=127.0.0.1:42969 sessionTimeout=30000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@74134494 19:06:38.273 [main] INFO org.apache.zookeeper.ClientCnxnSocket - jute.maxbuffer value is 4194304 Bytes 19:06:38.284 [main] INFO org.apache.zookeeper.ClientCnxn - zookeeper.request.timeout value is 0. feature enabled=false 19:06:38.286 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:06:38.288 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Waiting until connected. 19:06:38.294 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.SaslServerPrincipal - Canonicalized address to localhost 19:06:38.295 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - JAAS loginContext is: Client 19:06:38.297 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.Login - Client successfully logged in. 19:06:38.299 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.client.ZooKeeperSaslClient - Client will use DIGEST-MD5 as SASL mechanism. 19:06:38.312 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:42969. 19:06:38.313 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.ClientCnxn - SASL config status: Will attempt to SASL-authenticate using Login Context section 'Client' 19:06:38.316 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established, initiating session, client: /127.0.0.1:46386, server: localhost/127.0.0.1:42969 19:06:38.316 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:42969] DEBUG org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:46386 19:06:38.319 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Session establishment request sent on localhost/127.0.0.1:42969 19:06:38.329 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Session establishment request from client /127.0.0.1:46386 client's lastZxid is 0x0 19:06:38.331 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Adding session 0x1000002c50e0000 19:06:38.331 [NIOWorkerThread-1] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client attempting to establish new session: session = 0x1000002c50e0000, zxid = 0x0, timeout = 30000, address = /127.0.0.1:46386 19:06:38.336 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 19:06:38.337 [SyncThread:0] INFO org.apache.zookeeper.server.persistence.FileTxnLog - Creating new log file: log.1 19:06:38.372 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 19:06:38.377 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1, Digest in log and 
actual tree: 1371985504 19:06:38.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:createSession cxid:0x0 zxid:0x1 txntype:-10 reqpath:n/a 19:06:38.385 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Established session 0x1000002c50e0000 with negotiated timeout 30000 for client /127.0.0.1:46386 19:06:38.387 [main-SendThread(127.0.0.1:42969)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:42969, session id = 0x1000002c50e0000, negotiated timeout = 30000 19:06:38.394 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=0 19:06:38.395 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:None path:null 19:06:38.398 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Connected. 19:06:38.399 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 19:06:38.400 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 0 19:06:38.400 [NIOWorkerThread-3] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 101 19:06:38.404 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=101) 19:06:38.406 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - ClientCnxn:sendSaslPacket:length=284 19:06:38.407 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Responding to client SASL token. 19:06:38.407 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of client SASL token: 284 19:06:38.407 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.auth.SaslServerCallbackHandler - client supplied realm: zk-sasl-md5 19:06:38.408 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Successfully authenticated client: authenticationID=zooclient; authorizationID=zooclient. 19:06:38.454 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.auth.SaslServerCallbackHandler - Setting authorizedID: zooclient 19:06:38.455 [NIOWorkerThread-5] INFO org.apache.zookeeper.server.ZooKeeperServer - adding SASL authorization for authorizationID: zooclient 19:06:38.455 [NIOWorkerThread-5] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Size of server SASL response: 40 19:06:38.458 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 19:06:38.460 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 
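The DIGEST-MD5 exchange above succeeds because client and server share a JAAS configuration: earlier entries show the JVM running with java.security.auth.login.config=src/test/resources/jaas.conf and the ZooKeeper client using the 'Client' login context, and the server now reports authenticationID=zooclient. The file itself is never printed in this log, so the sketch below is hypothetical; the login-module class names are the standard ones for ZooKeeper DIGEST-MD5 and Kafka PLAIN, while the passwords and broker user are placeholders:

    public final class JaasSetupSketch {
        public static void main(String[] args) {
            // A jaas.conf serving this setup would typically contain sections like:
            //
            //   Server  { org.apache.zookeeper.server.auth.DigestLoginModule required
            //             user_zooclient="<secret>"; };
            //   Client  { org.apache.zookeeper.server.auth.DigestLoginModule required
            //             username="zooclient" password="<secret>"; };
            //   KafkaServer { org.apache.kafka.common.security.plain.PlainLoginModule required
            //                 username="<broker-user>" password="<secret>"
            //                 user_<broker-user>="<secret>"; };
            //
            // Pointing the JVM at the file programmatically is equivalent to the
            // -Djava.security.auth.login.config flag visible in this log.
            System.setProperty("java.security.auth.login.config",
                    "src/test/resources/jaas.conf");
        }
    }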
19:06:38.460 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.client.ZooKeeperSaslClient - saslClient.evaluateChallenge(len=40) 19:06:38.461 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Deferring non-priming packet clientPath:/consumers serverPath:/consumers finished:false header:: 0,1 replyHeader:: 0,0,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: until SASL authentication completes. 19:06:38.462 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SaslAuthenticated type:None path:null 19:06:38.464 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.465 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.467 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.467 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.467 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.475 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 1371985504 19:06:38.475 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 1355400778 19:06:38.478 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 19:06:38.481 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - consumers 19:06:38.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2, Digest in log and actual tree: 4365704094 19:06:38.484 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x3 zxid:0x2 txntype:1 reqpath:n/a 19:06:38.486 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/consumers serverPath:/consumers finished:false header:: 3,1 replyHeader:: 3,2,0 request:: '/consumers,,v{s{31,s{'world,'anyone}}},0 response:: '/consumers 19:06:38.509 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.510 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x4 zxid:0x3 txntype:-1 reqpath:n/a 19:06:38.514 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:38.515 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 4,1 replyHeader:: 4,3,-101 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: 19:06:38.517 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking 
session 0x1000002c50e0000 19:06:38.518 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.518 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.518 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.518 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.519 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 4365704094 19:06:38.519 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 4483665320 19:06:38.521 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 19:06:38.521 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:38.522 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4, Digest in log and actual tree: 5518826070 19:06:38.522 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x5 zxid:0x4 txntype:1 reqpath:n/a 19:06:38.523 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers serverPath:/brokers finished:false header:: 5,1 replyHeader:: 5,4,0 request:: '/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers 19:06:38.525 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.525 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.526 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.526 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.526 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.527 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 5518826070 19:06:38.527 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 6108270556 19:06:38.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 19:06:38.529 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:38.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5, Digest in log and actual tree: 9632999860 19:06:38.529 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x6 zxid:0x5 txntype:1 reqpath:n/a 19:06:38.531 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids 
serverPath:/brokers/ids finished:false header:: 6,1 replyHeader:: 6,5,0 request:: '/brokers/ids,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/ids 19:06:38.534 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.534 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.534 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.534 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.534 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.535 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 9632999860 19:06:38.535 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 10281629978 19:06:38.536 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 19:06:38.537 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:38.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6, Digest in log and actual tree: 14471174270 19:06:38.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x7 zxid:0x6 txntype:1 reqpath:n/a 19:06:38.539 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 7,1 replyHeader:: 7,6,0 request:: '/brokers/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics 19:06:38.541 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.541 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x8 zxid:0x7 txntype:-1 reqpath:n/a 19:06:38.544 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:38.546 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 8,1 replyHeader:: 8,7,-101 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: 19:06:38.547 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.548 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.548 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.548 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.548 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.548 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 14471174270 19:06:38.548 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 16004249146 19:06:38.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 19:06:38.550 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8, Digest in log and actual tree: 18424691238 19:06:38.551 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x9 zxid:0x8 txntype:1 reqpath:n/a 19:06:38.552 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config serverPath:/config finished:false header:: 9,1 replyHeader:: 9,8,0 request:: '/config,,v{s{31,s{'world,'anyone}}},0 response:: '/config 19:06:38.554 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.554 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.554 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.555 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.555 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.556 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 18424691238 19:06:38.556 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 19445924784 19:06:38.558 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 19:06:38.559 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.559 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 9, Digest in log and actual tree: 22794224793 19:06:38.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0xa zxid:0x9 txntype:1 reqpath:n/a 19:06:38.560 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 10,1 replyHeader:: 10,9,0 request:: '/config/changes,,v{s{31,s{'world,'anyone}}},0 response:: '/config/changes 19:06:38.563 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.563 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.564 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xb zxid:0xa txntype:-1 reqpath:n/a 19:06:38.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:38.566 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 11,1 replyHeader:: 11,10,-101 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: 19:06:38.568 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.568 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 22794224793 19:06:38.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 22205853025 19:06:38.570 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 19:06:38.571 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 19:06:38.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: b, Digest in log and actual tree: 24855557870 19:06:38.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0xc zxid:0xb txntype:1 reqpath:n/a 19:06:38.573 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin serverPath:/admin finished:false header:: 12,1 replyHeader:: 12,11,0 request:: '/admin,,v{s{31,s{'world,'anyone}}},0 response:: '/admin 19:06:38.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.576 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.576 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.576 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.576 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 24855557870 19:06:38.576 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 24156989240 19:06:38.580 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 19:06:38.580 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - admin 19:06:38.580 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: c, Digest in log and actual tree: 27631773099 19:06:38.581 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0xd zxid:0xc txntype:1 reqpath:n/a 19:06:38.581 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 13,1 replyHeader:: 13,12,0 request:: '/admin/delete_topics,,v{s{31,s{'world,'anyone}}},0 response:: '/admin/delete_topics 19:06:38.583 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.584 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.584 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.584 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.584 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.585 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 27631773099 19:06:38.585 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 29052644882 19:06:38.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 19:06:38.588 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:38.589 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: d, Digest in log and actual tree: 32000070324 19:06:38.589 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0xe zxid:0xd txntype:1 reqpath:n/a 19:06:38.590 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/seqid serverPath:/brokers/seqid finished:false header:: 14,1 replyHeader:: 14,13,0 request:: '/brokers/seqid,,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/seqid 19:06:38.591 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.591 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.592 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.592 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.592 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
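While the broker registers its metadata znodes (/brokers, /config, /admin/delete_topics, /brokers/seqid, /isr_change_notification, ...), the KafkaConfig dump earlier already fixed the client-facing settings: a single SASL_PLAINTEXT listener on localhost:38537 with only the PLAIN mechanism enabled. A test client reaching that listener would therefore need properties along these lines; this is a sketch only, the port is ephemeral and the credentials shown are placeholders for whatever the test JAAS config actually defines:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public final class SaslPlainProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Address and security settings mirror the KafkaConfig values above;
            // the port changes on every run of the embedded broker.
            props.put("bootstrap.servers", "localhost:38537");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Illustrative credentials only; the real ones live in the JAAS config.
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("sketch-topic", "key", "value"));
                producer.flush();
            }
        }
    }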
19:06:38.592 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 32000070324 19:06:38.592 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 32787385049 19:06:38.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 19:06:38.593 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - isr_change_notification 19:06:38.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: e, Digest in log and actual tree: 34899445175 19:06:38.594 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0xf zxid:0xe txntype:1 reqpath:n/a 19:06:38.594 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 15,1 replyHeader:: 15,14,0 request:: '/isr_change_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/isr_change_notification 19:06:38.597 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.597 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.597 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.597 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.597 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.598 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 34899445175 19:06:38.598 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 34279321222 19:06:38.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 19:06:38.599 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 19:06:38.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: f, Digest in log and actual tree: 35994256551 19:06:38.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x10 zxid:0xf txntype:1 reqpath:n/a 19:06:38.600 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 16,1 replyHeader:: 16,15,0 request:: '/latest_producer_id_block,,v{s{31,s{'world,'anyone}}},0 response:: '/latest_producer_id_block 19:06:38.602 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.602 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.602 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.603 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.603 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.603 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 35994256551 19:06:38.603 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 35171796512 19:06:38.604 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 19:06:38.605 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - log_dir_event_notification 19:06:38.605 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 10, Digest in log and actual tree: 35959757173 19:06:38.605 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x11 zxid:0x10 txntype:1 reqpath:n/a 19:06:38.606 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 17,1 replyHeader:: 17,16,0 request:: '/log_dir_event_notification,,v{s{31,s{'world,'anyone}}},0 response:: '/log_dir_event_notification 19:06:38.607 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.607 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.607 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.607 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.608 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.608 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 35959757173 19:06:38.608 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 35981745274 19:06:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 19:06:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 11, Digest in log and actual tree: 37871775523 19:06:38.609 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x12 zxid:0x11 txntype:1 reqpath:n/a 19:06:38.610 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 18,1 replyHeader:: 18,17,0 
request:: '/config/topics,,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 37871775523 19:06:38.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 35641546054 19:06:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 19:06:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 12, Digest in log and actual tree: 38723852310 19:06:38.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x13 zxid:0x12 txntype:1 reqpath:n/a 19:06:38.615 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 19,1 replyHeader:: 19,18,0 request:: '/config/clients,,v{s{31,s{'world,'anyone}}},0 response:: '/config/clients 19:06:38.616 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.616 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.617 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.617 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.617 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.617 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 38723852310 19:06:38.617 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 40510818641 19:06:38.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 19:06:38.619 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 13, Digest in log and actual tree: 41666093280 19:06:38.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
sessionid:0x1000002c50e0000 type:create cxid:0x14 zxid:0x13 txntype:1 reqpath:n/a 19:06:38.620 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 20,1 replyHeader:: 20,19,0 request:: '/config/users,,v{s{31,s{'world,'anyone}}},0 response:: '/config/users 19:06:38.621 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.621 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.621 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.622 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.622 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.622 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 41666093280 19:06:38.622 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 40216662154 19:06:38.623 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 19:06:38.623 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.623 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 14, Digest in log and actual tree: 43617296110 19:06:38.624 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x15 zxid:0x14 txntype:1 reqpath:n/a 19:06:38.624 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 21,1 replyHeader:: 21,20,0 request:: '/config/brokers,,v{s{31,s{'world,'anyone}}},0 response:: '/config/brokers 19:06:38.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:38.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:38.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:38.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:38.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 43617296110 19:06:38.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 43666994109 19:06:38.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 19:06:38.628 
[SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:38.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 15, Digest in log and actual tree: 46465554091 19:06:38.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x16 zxid:0x15 txntype:1 reqpath:n/a 19:06:38.632 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 22,1 replyHeader:: 22,21,0 request:: '/config/ips,,v{s{31,s{'world,'anyone}}},0 response:: '/config/ips 19:06:38.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.645 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 19:06:38.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x17 zxid:0xfffffffffffffffe txntype:unknown reqpath:/cluster/id 19:06:38.649 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 23,4 replyHeader:: 23,21,-101 request:: '/cluster/id,F response:: 19:06:38.998 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:38.999 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:39.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x18 zxid:0x16 txntype:-1 reqpath:n/a 19:06:39.006 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:39.007 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 24,1 replyHeader:: 24,22,-101 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22724e4231506d5476547a2d3750445f33583077683367227d,v{s{31,s{'world,'anyone}}},0 response:: 19:06:39.009 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.009 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:39.009 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:39.009 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:39.010 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:39.010 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 46465554091 19:06:39.010 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46142697925 19:06:39.011 
[SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 19:06:39.011 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 19:06:39.012 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 17, Digest in log and actual tree: 46859194475 19:06:39.012 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x19 zxid:0x17 txntype:1 reqpath:n/a 19:06:39.013 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/cluster serverPath:/cluster finished:false header:: 25,1 replyHeader:: 25,23,0 request:: '/cluster,,v{s{31,s{'world,'anyone}}},0 response:: '/cluster 19:06:39.014 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.014 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:39.015 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:39.015 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:39.015 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:39.015 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 46859194475 19:06:39.015 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 46674769223 19:06:39.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 19:06:39.017 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - cluster 19:06:39.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 18, Digest in log and actual tree: 50535947550 19:06:39.017 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x1a zxid:0x18 txntype:1 reqpath:n/a 19:06:39.018 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/cluster/id serverPath:/cluster/id finished:false header:: 26,1 replyHeader:: 26,24,0 request:: '/cluster/id,#7b2276657273696f6e223a2231222c226964223a22724e4231506d5476547a2d3750445f33583077683367227d,v{s{31,s{'world,'anyone}}},0 response:: '/cluster/id 19:06:39.019 [main] INFO kafka.server.KafkaServer - Cluster ID = rNB1PmTvTz-7PD_3X0wh3g 19:06:39.025 [main] WARN kafka.server.BrokerMetadataCheckpoint - No meta.properties file under dir /tmp/kafka-unit12180474530667575823/meta.properties 19:06:39.038 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 19:06:39.039 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x1b zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/ 19:06:39.040 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/brokers/ serverPath:/config/brokers/ finished:false header:: 27,4 replyHeader:: 27,24,-101 request:: '/config/brokers/,F response:: 19:06:39.091 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 19:06:39.091 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x1c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers/1 19:06:39.092 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/brokers/1 serverPath:/config/brokers/1 finished:false header:: 28,4 replyHeader:: 28,24,-101 request:: '/config/brokers/1,F response:: 19:06:39.096 [main] INFO kafka.server.KafkaConfig - KafkaConfig values: advertised.listeners = SASL_PLAINTEXT://localhost:38537 alter.config.policy.class.name = null alter.log.dirs.replication.quota.window.num = 11 alter.log.dirs.replication.quota.window.size.seconds = 1 authorizer.class.name = auto.create.topics.enable = true auto.leader.rebalance.enable = true background.threads = 10 broker.heartbeat.interval.ms = 2000 broker.id = 1 broker.id.generation.enable = true broker.rack = null broker.session.timeout.ms = 9000 client.quota.callback.class = null compression.type = producer connection.failed.authentication.delay.ms = 100 connections.max.idle.ms = 600000 connections.max.reauth.ms = 0 control.plane.listener.name = null controlled.shutdown.enable = true controlled.shutdown.max.retries = 3 controlled.shutdown.retry.backoff.ms = 5000 controller.listener.names = null controller.quorum.append.linger.ms = 25 controller.quorum.election.backoff.max.ms = 1000 controller.quorum.election.timeout.ms = 1000 controller.quorum.fetch.timeout.ms = 2000 controller.quorum.request.timeout.ms = 2000 controller.quorum.retry.backoff.ms = 20 controller.quorum.voters = [] controller.quota.window.num = 11 controller.quota.window.size.seconds = 1 controller.socket.timeout.ms = 30000 create.topic.policy.class.name = null default.replication.factor = 1 delegation.token.expiry.check.interval.ms = 3600000 delegation.token.expiry.time.ms = 86400000 delegation.token.master.key = null delegation.token.max.lifetime.ms = 604800000 delegation.token.secret.key = null delete.records.purgatory.purge.interval.requests = 1 delete.topic.enable = true early.start.listeners = null fetch.max.bytes = 57671680 fetch.purgatory.purge.interval.requests = 1000 group.initial.rebalance.delay.ms = 3000 group.max.session.timeout.ms = 1800000 group.max.size = 2147483647 group.min.session.timeout.ms = 6000 initial.broker.registration.timeout.ms = 60000 inter.broker.listener.name = null inter.broker.protocol.version = 3.3-IV3 kafka.metrics.polling.interval.secs = 10 kafka.metrics.reporters = [] leader.imbalance.check.interval.seconds = 300 leader.imbalance.per.broker.percentage = 10 listener.security.protocol.map = 
PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL listeners = SASL_PLAINTEXT://localhost:38537 log.cleaner.backoff.ms = 15000 log.cleaner.dedupe.buffer.size = 134217728 log.cleaner.delete.retention.ms = 86400000 log.cleaner.enable = true log.cleaner.io.buffer.load.factor = 0.9 log.cleaner.io.buffer.size = 524288 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 log.cleaner.max.compaction.lag.ms = 9223372036854775807 log.cleaner.min.cleanable.ratio = 0.5 log.cleaner.min.compaction.lag.ms = 0 log.cleaner.threads = 1 log.cleanup.policy = [delete] log.dir = /tmp/kafka-unit12180474530667575823 log.dirs = null log.flush.interval.messages = 1 log.flush.interval.ms = null log.flush.offset.checkpoint.interval.ms = 60000 log.flush.scheduler.interval.ms = 9223372036854775807 log.flush.start.offset.checkpoint.interval.ms = 60000 log.index.interval.bytes = 4096 log.index.size.max.bytes = 10485760 log.message.downconversion.enable = true log.message.format.version = 3.0-IV1 log.message.timestamp.difference.max.ms = 9223372036854775807 log.message.timestamp.type = CreateTime log.preallocate = false log.retention.bytes = -1 log.retention.check.interval.ms = 300000 log.retention.hours = 168 log.retention.minutes = null log.retention.ms = null log.roll.hours = 168 log.roll.jitter.hours = 0 log.roll.jitter.ms = null log.roll.ms = null log.segment.bytes = 1073741824 log.segment.delete.delay.ms = 60000 max.connection.creation.rate = 2147483647 max.connections = 2147483647 max.connections.per.ip = 2147483647 max.connections.per.ip.overrides = max.incremental.fetch.session.cache.slots = 1000 message.max.bytes = 1048588 metadata.log.dir = null metadata.log.max.record.bytes.between.snapshots = 20971520 metadata.log.segment.bytes = 1073741824 metadata.log.segment.min.bytes = 8388608 metadata.log.segment.ms = 604800000 metadata.max.idle.interval.ms = 500 metadata.max.retention.bytes = -1 metadata.max.retention.ms = 604800000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 min.insync.replicas = 1 node.id = 1 num.io.threads = 2 num.network.threads = 2 num.partitions = 1 num.recovery.threads.per.data.dir = 1 num.replica.alter.log.dirs.threads = null num.replica.fetchers = 1 offset.metadata.max.bytes = 4096 offsets.commit.required.acks = -1 offsets.commit.timeout.ms = 5000 offsets.load.buffer.size = 5242880 offsets.retention.check.interval.ms = 600000 offsets.retention.minutes = 10080 offsets.topic.compression.codec = 0 offsets.topic.num.partitions = 50 offsets.topic.replication.factor = 1 offsets.topic.segment.bytes = 104857600 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding password.encoder.iterations = 4096 password.encoder.key.length = 128 password.encoder.keyfactory.algorithm = null password.encoder.old.secret = null password.encoder.secret = null principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder process.roles = [] producer.purgatory.purge.interval.requests = 1000 queued.max.request.bytes = -1 queued.max.requests = 500 quota.window.num = 11 quota.window.size.seconds = 1 remote.log.index.file.cache.total.size.bytes = 1073741824 remote.log.manager.task.interval.ms = 30000 remote.log.manager.task.retry.backoff.max.ms = 30000 remote.log.manager.task.retry.backoff.ms = 500 remote.log.manager.task.retry.jitter = 0.2 remote.log.manager.thread.pool.size = 10 remote.log.metadata.manager.class.name = null remote.log.metadata.manager.class.path = null 
remote.log.metadata.manager.impl.prefix = null remote.log.metadata.manager.listener.name = null remote.log.reader.max.pending.tasks = 100 remote.log.reader.threads = 10 remote.log.storage.manager.class.name = null remote.log.storage.manager.class.path = null remote.log.storage.manager.impl.prefix = null remote.log.storage.system.enable = false replica.fetch.backoff.ms = 1000 replica.fetch.max.bytes = 1048576 replica.fetch.min.bytes = 1 replica.fetch.response.max.bytes = 10485760 replica.fetch.wait.max.ms = 500 replica.high.watermark.checkpoint.interval.ms = 5000 replica.lag.time.max.ms = 30000 replica.selector.class = null replica.socket.receive.buffer.bytes = 65536 replica.socket.timeout.ms = 30000 replication.quota.window.num = 11 replication.quota.window.size.seconds = 1 request.timeout.ms = 30000 reserved.broker.max.id = 1000 sasl.client.callback.handler.class = null sasl.enabled.mechanisms = [PLAIN] sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.principal.to.local.rules = [DEFAULT] sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism.controller.protocol = GSSAPI sasl.mechanism.inter.broker.protocol = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null sasl.server.callback.handler.class = null sasl.server.max.receive.size = 524288 security.inter.broker.protocol = SASL_PLAINTEXT security.providers = null socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 socket.listen.backlog.size = 50 socket.receive.buffer.bytes = 102400 socket.request.max.bytes = 104857600 socket.send.buffer.bytes = 102400 ssl.cipher.suites = [] ssl.client.auth = none ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.principal.mapping.rules = DEFAULT ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 transaction.max.timeout.ms = 900000 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 transaction.state.log.load.buffer.size = 5242880 transaction.state.log.min.isr = 1 transaction.state.log.num.partitions = 4 transaction.state.log.replication.factor = 1 transaction.state.log.segment.bytes = 104857600 
transactional.id.expiration.ms = 604800000 unclean.leader.election.enable = false zookeeper.clientCnxnSocket = null zookeeper.connect = 127.0.0.1:42969 zookeeper.connection.timeout.ms = null zookeeper.max.in.flight.requests = 10 zookeeper.session.timeout.ms = 30000 zookeeper.set.acl = false zookeeper.ssl.cipher.suites = null zookeeper.ssl.client.enable = false zookeeper.ssl.crl.enable = false zookeeper.ssl.enabled.protocols = null zookeeper.ssl.endpoint.identification.algorithm = HTTPS zookeeper.ssl.keystore.location = null zookeeper.ssl.keystore.password = null zookeeper.ssl.keystore.type = null zookeeper.ssl.ocsp.enable = false zookeeper.ssl.protocol = TLSv1.2 zookeeper.ssl.truststore.location = null zookeeper.ssl.truststore.password = null zookeeper.ssl.truststore.type = null 19:06:39.100 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:06:39.147 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Starting 19:06:39.147 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Starting 19:06:39.148 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Starting 19:06:39.155 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Starting 19:06:39.198 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.198 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:39.198 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x1d zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:39.198 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:39.198 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:39.198 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:39.201 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 29,12 replyHeader:: 29,24,0 request:: '/brokers/topics,F response:: v{},s{6,6,1755111998534,1755111998534,0,0,0,0,0,0,6} 19:06:39.205 [main] INFO kafka.log.LogManager - Loading logs from log dirs ArraySeq(/tmp/kafka-unit12180474530667575823) 19:06:39.209 [main] INFO kafka.log.LogManager - Attempting recovery for all logs in /tmp/kafka-unit12180474530667575823 since no clean shutdown file was found 19:06:39.215 [main] DEBUG kafka.log.LogManager - Adding log recovery metrics 19:06:39.220 [main] DEBUG kafka.log.LogManager - Removing log recovery metrics 19:06:39.223 [main] INFO kafka.log.LogManager - Loaded 0 logs in 18ms. 19:06:39.223 [main] INFO kafka.log.LogManager - Starting log cleanup with a period of 300000 ms. 
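The KafkaConfig dump above is the configuration of the embedded kafka-unit test broker started for this pairwise run. The sketch below reconstructs, in Java, the non-default settings that are visible in that dump (broker.id, the SASL_PLAINTEXT listener on port 38537, the ZooKeeper connect string on port 42969, single-replica internal topics); the class name is hypothetical and anything not shown above is assumed to stay at Kafka defaults.

import java.util.Properties;

// Sketch only: mirrors the non-default values from the KafkaConfig dump above.
public class EmbeddedBrokerProps {

    public static Properties brokerProps() {
        Properties p = new Properties();
        p.put("broker.id", "1");
        p.put("listeners", "SASL_PLAINTEXT://localhost:38537");
        p.put("advertised.listeners", "SASL_PLAINTEXT://localhost:38537");
        p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        p.put("sasl.enabled.mechanisms", "PLAIN");
        p.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        p.put("zookeeper.connect", "127.0.0.1:42969");
        p.put("zookeeper.session.timeout.ms", "30000");
        p.put("log.dir", "/tmp/kafka-unit12180474530667575823");
        p.put("log.flush.interval.messages", "1");
        p.put("num.partitions", "1");
        p.put("default.replication.factor", "1");
        p.put("offsets.topic.replication.factor", "1");
        p.put("transaction.state.log.replication.factor", "1");
        p.put("group.initial.rebalance.delay.ms", "3000");
        return p;
    }

    public static void main(String[] args) {
        brokerProps().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}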
19:06:39.225 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-retention with initial delay 30000 ms and period 300000 ms. 19:06:39.225 [main] INFO kafka.log.LogManager - Starting log flusher with a default period of 9223372036854775807 ms. 19:06:39.226 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-flusher with initial delay 30000 ms and period 9223372036854775807 ms. 19:06:39.226 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-recovery-point-checkpoint with initial delay 30000 ms and period 60000 ms. 19:06:39.227 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-log-start-offset-checkpoint with initial delay 30000 ms and period 60000 ms. 19:06:39.228 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task kafka-delete-logs with initial delay 30000 ms and period -1 ms. 19:06:39.242 [main] INFO kafka.log.LogCleaner - Starting the log cleaner 19:06:39.294 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Starting 19:06:39.323 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Starting 19:06:39.329 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.330 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:39.330 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x1e zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:39.332 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 30,3 replyHeader:: 30,24,-101 request:: '/feature,T response:: 19:06:39.337 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 19:06:39.340 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:39.340 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:39.340 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x1f zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:39.340 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 31,4 replyHeader:: 31,24,-101 request:: '/feature,T response:: 19:06:39.342 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node at path: /feature does not exist 19:06:39.367 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
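"Successfully logged in" above is the broker completing its SASL login; with sasl.enabled.mechanisms = [PLAIN] this is normally driven by a JAAS entry for the SASL_PLAINTEXT listener. A hedged sketch of such a configuration follows; the username and password values are placeholders for illustration, since the real credentials used by this job do not appear in the log.

import java.util.Properties;

// Illustrative only: the credentials below are placeholders, not values from this build.
public class SaslPlainListenerConfig {

    public static void main(String[] args) {
        Properties p = new Properties();
        // Per-listener JAAS entry for the PLAIN mechanism on the SASL_PLAINTEXT listener.
        p.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\" "
                + "user_admin=\"admin-secret\";");
        p.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}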
19:06:39.401 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Starting 19:06:39.402 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.404 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:39.520 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.520 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:39.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.622 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:39.723 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.723 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:39.824 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.824 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:39.925 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:39.925 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.018 [main] INFO 
kafka.network.ConnectionQuotas - Updated connection-accept-rate max connection creation rate to 2147483647 19:06:40.023 [main] INFO kafka.network.DataPlaneAcceptor - Awaiting socket connections on localhost:38537. 19:06:40.026 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.027 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.070 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(SASL_PLAINTEXT) 19:06:40.081 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Starting 19:06:40.081 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.081 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.119 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Starting 19:06:40.121 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Starting 19:06:40.123 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Starting 19:06:40.125 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Starting 19:06:40.128 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.128 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.146 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task isr-expiration with initial delay 0 ms and period 15000 ms. 19:06:40.147 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task shutdown-idle-replica-alter-log-dirs-thread with initial delay 0 ms and period 10000 ms. 
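With the data-plane acceptor now listening on localhost:38537, a test client can reach the broker over SASL_PLAINTEXT. The snippet below is a minimal client-side sketch under that assumption; the topic name and credentials are placeholders, since the actual client setup of the integration test is not part of this console output.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of a producer pointed at the embedded broker; marked values are placeholders.
public class TestClientSketch {

    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:38537");   // advertised listener from the log above
        p.put("security.protocol", "SASL_PLAINTEXT");    // from the log above
        p.put("sasl.mechanism", "PLAIN");                // from the KafkaConfig dump
        p.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value")); // placeholder topic
        }
    }
}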
19:06:40.149 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Starting 19:06:40.151 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:06:40.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x20 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:06:40.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.152 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.153 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 32,12 replyHeader:: 32,24,0 request:: '/brokers/ids,F response:: v{},s{5,5,1755111998525,1755111998525,0,0,0,0,0,0,5} 19:06:40.183 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.183 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.194 [main] INFO kafka.zk.KafkaZkClient - Creating /brokers/ids/1 (is it secure? 
false) 19:06:40.211 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.212 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:40.212 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:40.212 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.212 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.213 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 50535947550 19:06:40.213 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 50757282407 19:06:40.214 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.214 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 19:06:40.214 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.214 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.215 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 50789499953 19:06:40.216 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 54960586414 19:06:40.217 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 19:06:40.218 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:40.218 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:40.218 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 19, Digest in log and actual tree: 54960586414 19:06:40.218 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x21 zxid:0x19 txntype:14 reqpath:n/a 19:06:40.219 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 33,14 replyHeader:: 33,25,0 request:: org.apache.zookeeper.MultiOperationRecord@89e1c694 response:: org.apache.zookeeper.MultiResponse@1dbbce85 19:06:40.224 [main] INFO kafka.zk.KafkaZkClient - Stat of the created znode at /brokers/ids/1 is: 25,25,1755112000211,1755112000211,1,0,0,72057605933891584,209,0,25 19:06:40.226 [main] INFO kafka.zk.KafkaZkClient - Registered broker 1 at path /brokers/ids/1 with addresses: SASL_PLAINTEXT://localhost:38537, czxid (broker epoch): 25 19:06:40.230 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.231 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.284 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.284 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.329 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Starting 19:06:40.332 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.332 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.343 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Starting 19:06:40.347 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.348 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.348 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x22 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.349 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 34,3 replyHeader:: 34,25,-101 request:: '/controller,T response:: 19:06:40.351 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.351 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.351 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x23 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.351 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Starting 19:06:40.352 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 35,4 replyHeader:: 35,25,-101 request:: '/controller,T response:: 19:06:40.353 [ExpirationReaper-1-Rebalance] INFO 
kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Starting 19:06:40.356 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 19:06:40.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x24 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller_epoch 19:06:40.357 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 36,4 replyHeader:: 36,25,-101 request:: '/controller_epoch,F response:: 19:06:40.359 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 54960586414 19:06:40.360 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58709043715 19:06:40.361 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 19:06:40.362 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 19:06:40.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1a, Digest in log and actual tree: 60593566232 19:06:40.362 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x25 zxid:0x1a txntype:1 reqpath:n/a 19:06:40.363 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller_epoch serverPath:/controller_epoch finished:false header:: 37,1 replyHeader:: 37,26,0 request:: '/controller_epoch,#30,v{s{31,s{'world,'anyone}}},0 response:: '/controller_epoch 19:06:40.364 [controller-event-thread] INFO kafka.zk.KafkaZkClient - Successfully created /controller_epoch with initial epoch 0 19:06:40.365 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Try to create /controller and increment controller epoch to 1 with expected controller epoch zkVersion 0 19:06:40.368 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.368 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:40.368 [ProcessThread(sid:0 cport:42969):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:40.368 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.368 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 60593566232 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60378200393 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.370 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 60795534718 19:06:40.370 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59106945018 19:06:40.371 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 19:06:40.371 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 19:06:40.375 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller_epoch 19:06:40.375 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1b, Digest in log and actual tree: 59106945018 19:06:40.376 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x26 zxid:0x1b txntype:14 reqpath:n/a 19:06:40.375 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:06:40.376 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/controller for session id 0x1000002c50e0000 19:06:40.376 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/controller 19:06:40.377 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 38,14 replyHeader:: 38,27,0 request:: org.apache.zookeeper.MultiOperationRecord@d55ce86f response:: org.apache.zookeeper.MultiResponse@f3584fa6 19:06:40.379 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 1 successfully elected as the controller. 
Epoch incremented to 1 and epoch zk version is now 1 19:06:40.380 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.380 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x27 zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.381 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 39,4 replyHeader:: 39,27,-101 request:: '/feature,T response:: 19:06:40.385 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.385 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.386 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) 19:06:40.387 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Starting up. 19:06:40.389 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.389 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:40.389 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:40.389 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.389 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.390 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 59106945018 19:06:40.390 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 56294430788 19:06:40.390 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.391 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 19:06:40.391 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - feature 19:06:40.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1c, Digest in log and actual tree: 56904395646 19:06:40.392 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x28 zxid:0x1c txntype:1 reqpath:n/a 19:06:40.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - 
Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:40.393 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x29 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:40.392 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:06:40.393 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeCreated path:/feature for session id 0x1000002c50e0000 19:06:40.394 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeCreated path:/feature 19:06:40.395 [main-EventThread] INFO kafka.server.FinalizedFeatureChangeListener - Feature ZK node created at path: /feature 19:06:40.395 [feature-zk-node-event-process-thread] DEBUG kafka.server.FinalizedFeatureChangeListener - Reading feature ZK node at path: /feature 19:06:40.396 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 40,1 replyHeader:: 40,28,0 request:: '/feature,#7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,v{s{31,s{'world,'anyone}}},0 response:: '/feature 19:06:40.396 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.397 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 41,4 replyHeader:: 41,28,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 19:06:40.397 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.398 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x2a zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.398 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.398 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.399 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:06:40.399 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.399 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task delete-expired-group-metadata with initial delay 0 ms and period 600000 ms. 
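The ZooKeeper packet dumps above carry node payloads as hex strings: the create request for /feature, for example, contains 7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d, which is simply the UTF-8 bytes of {"features":{},"version":2,"status":1}, matching the FeatureZNode(2,Enabled,Map()) the controller logged just before. A minimal Java sketch for turning such a dump back into readable text; the class name and the hard-coded sample string are illustrative only:

    import java.nio.charset.StandardCharsets;

    public class ZkPacketHexDecoder {
        // Decode a ZooKeeper packet-dump hex payload into its UTF-8 text form.
        static String decodeHex(String hex) {
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            return new String(bytes, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            String featurePayload =
                "7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d";
            System.out.println(decodeHex(featurePayload)); // {"features":{},"version":2,"status":1}
        }
    }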
19:06:40.399 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x2b zxid:0xfffffffffffffffe txntype:unknown reqpath:/feature 19:06:40.400 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.401 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.400 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Startup complete. 19:06:40.400 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 42,4 replyHeader:: 42,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1755112000389,1755112000389,0,0,0,0,38,0,28} 19:06:40.404 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/feature serverPath:/feature finished:false header:: 43,4 replyHeader:: 43,28,0 request:: '/feature,T response:: #7b226665617475726573223a7b7d2c2276657273696f6e223a322c22737461747573223a317d,s{28,28,1755112000389,1755112000389,0,0,0,0,38,0,28} 19:06:40.433 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.433 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.435 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Starting up. 19:06:40.435 [main] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:06:40.436 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transaction-abort with initial delay 10000 ms and period 10000 ms. 
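The KafkaScheduler lines above ("delete-expired-group-metadata with initial delay 0 ms and period 600000 ms", "transaction-abort with initial delay 10000 ms and period 10000 ms") describe plain initial-delay/period background tasks. The sketch below reproduces that pattern with the JDK's ScheduledExecutorService; it is not kafka.utils.KafkaScheduler itself, and the task bodies are placeholders:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicTaskSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

            // "initial delay 0 ms and period 600000 ms" (delete-expired-group-metadata above)
            scheduler.scheduleAtFixedRate(
                () -> System.out.println("placeholder: purge expired group metadata"),
                0, 600_000, TimeUnit.MILLISECONDS);

            // "initial delay 10000 ms and period 10000 ms" (transaction-abort above)
            scheduler.scheduleAtFixedRate(
                () -> System.out.println("placeholder: abort timed-out transactions"),
                10_000, 10_000, TimeUnit.MILLISECONDS);

            // The executor keeps running until scheduler.shutdown() is called.
        }
    }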
19:06:40.438 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 19:06:40.438 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x2c zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__transaction_state 19:06:40.439 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__transaction_state serverPath:/brokers/topics/__transaction_state finished:false header:: 44,4 replyHeader:: 44,28,-101 request:: '/brokers/topics/__transaction_state,F response:: 19:06:40.442 [main] DEBUG kafka.utils.KafkaScheduler - Scheduling task transactionalId-expiration with initial delay 3600000 ms and period 3600000 ms. 19:06:40.443 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Startup complete. 19:06:40.445 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Starting 19:06:40.448 [feature-zk-node-event-process-thread] INFO kafka.server.metadata.ZkMetadataCache - [MetadataCache brokerId=1] Updated cache from existing to latest FinalizedFeaturesAndEpoch(features=Map(), epoch=0). 19:06:40.448 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Registering handlers 19:06:40.451 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.451 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x2d zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.452 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 45,3 replyHeader:: 45,28,-101 request:: '/admin/preferred_replica_election,T response:: 19:06:40.455 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:06:40.455 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x2e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:06:40.456 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 46,3 replyHeader:: 46,28,-101 request:: '/admin/reassign_partitions,T 
response:: 19:06:40.457 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting log dir event notifications 19:06:40.458 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 19:06:40.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x2f zxid:0xfffffffffffffffe txntype:unknown reqpath:/log_dir_event_notification 19:06:40.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.458 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.459 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/log_dir_event_notification serverPath:/log_dir_event_notification finished:false header:: 47,12 replyHeader:: 47,28,0 request:: '/log_dir_event_notification,T response:: v{},s{16,16,1755111998602,1755111998602,0,0,0,0,0,0,16} 19:06:40.462 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Deleting isr change notifications 19:06:40.463 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 19:06:40.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x30 zxid:0xfffffffffffffffe txntype:unknown reqpath:/isr_change_notification 19:06:40.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.464 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.465 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/isr_change_notification serverPath:/isr_change_notification finished:false header:: 48,12 replyHeader:: 48,28,0 request:: '/isr_change_notification,T response:: v{},s{14,14,1755111998591,1755111998591,0,0,0,0,0,0,14} 19:06:40.467 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing controller context 19:06:40.468 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:06:40.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 
type:getChildren2 cxid:0x31 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids 19:06:40.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.468 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.469 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids serverPath:/brokers/ids finished:false header:: 49,12 replyHeader:: 49,28,0 request:: '/brokers/ids,T response:: v{'1},s{5,5,1755111998525,1755111998525,0,1,0,0,0,1,25} 19:06:40.474 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:06:40.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x32 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:06:40.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.474 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.475 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 50,4 replyHeader:: 50,28,0 request:: '/brokers/ids/1,F response:: #7b226665617475726573223a7b7d2c226c697374656e65725f73656375726974795f70726f746f636f6c5f6d6170223a7b225341534c5f504c41494e54455854223a225341534c5f504c41494e54455854227d2c22656e64706f696e7473223a5b225341534c5f504c41494e544558543a2f2f6c6f63616c686f73743a3338353337225d2c226a6d785f706f7274223a2d312c22706f7274223a2d312c22686f7374223a6e756c6c2c2276657273696f6e223a352c2274696d657374616d70223a2231373535313132303030313637227d,s{25,25,1755112000211,1755112000211,1,0,0,72057605933891584,209,0,25} 19:06:40.486 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.486 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.501 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 25) 19:06:40.504 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/brokers/topics 19:06:40.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x33 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:40.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.504 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.505 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 51,12 replyHeader:: 51,28,0 request:: '/brokers/topics,T response:: v{},s{6,6,1755111998534,1755111998534,0,0,0,0,0,0,6} 19:06:40.513 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Register BrokerModifications handler for Set(1) 19:06:40.515 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:06:40.515 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x34 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/ids/1 19:06:40.516 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/ids/1 serverPath:/brokers/ids/1 finished:false header:: 52,3 replyHeader:: 52,28,0 request:: '/brokers/ids/1,T response:: s{25,25,1755112000211,1755112000211,1,0,0,72057605933891584,209,0,25} 19:06:40.522 [controller-event-thread] DEBUG kafka.controller.ControllerChannelManager - [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 19:06:40.530 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Starting 19:06:40.534 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.534 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.537 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Starting 19:06:40.540 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently active brokers in the cluster: Set(1) 19:06:40.541 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Currently shutting brokers in the cluster: HashSet() 19:06:40.541 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Current list of topics in the cluster: HashSet() 19:06:40.541 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] 
Fetching topic deletions in progress 19:06:40.543 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 19:06:40.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x35 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics 19:06:40.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.543 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.545 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics serverPath:/admin/delete_topics finished:false header:: 53,12 replyHeader:: 53,28,0 request:: '/admin/delete_topics,T response:: v{},s{12,12,1755111998575,1755111998575,0,0,0,0,0,0,12} 19:06:40.547 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics to be deleted: 19:06:40.547 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] List of topics ineligible for deletion: 19:06:40.547 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Initializing topic deletion manager 19:06:40.548 [controller-event-thread] INFO kafka.controller.TopicDeletionManager - [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() 19:06:40.550 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Sending update metadata request 19:06:40.554 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions 19:06:40.562 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Initializing replica state 19:06:40.563 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Starting 19:06:40.564 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering online replica state changes 19:06:40.565 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 19:06:40.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x36 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics 19:06:40.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.565 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.565 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.566 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 19:06:40.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x37 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/changes 19:06:40.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.566 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.566 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics serverPath:/config/topics finished:false header:: 54,12 replyHeader:: 54,28,0 request:: '/config/topics,F response:: v{},s{17,17,1755111998607,1755111998607,0,0,0,0,0,0,17} 19:06:40.567 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/changes serverPath:/config/changes finished:false header:: 55,12 replyHeader:: 55,28,0 request:: '/config/changes,T response:: v{},s{9,9,1755111998554,1755111998554,0,0,0,0,0,0,9} 19:06:40.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 19:06:40.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x38 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/clients 19:06:40.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.569 [controller-event-thread] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Triggering offline replica state changes 19:06:40.569 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.570 [controller-event-thread] DEBUG kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() 19:06:40.570 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/clients serverPath:/config/clients finished:false header:: 56,12 replyHeader:: 56,28,0 request:: '/config/clients,F response:: v{},s{18,18,1755111998612,1755111998612,0,0,0,0,0,0,18} 19:06:40.570 [controller-event-thread] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Initializing partition state 19:06:40.571 [controller-event-thread] INFO 
kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Triggering online partition state changes 19:06:40.571 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:06:40.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x39 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:06:40.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.571 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.572 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 57,12 replyHeader:: 57,28,0 request:: '/config/users,F response:: v{},s{19,19,1755111998616,1755111998616,0,0,0,0,0,0,19} 19:06:40.573 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:40.573 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:06:40.574 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:06:40.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3a zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/users 19:06:40.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.574 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.575 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/users serverPath:/config/users finished:false header:: 58,12 replyHeader:: 58,28,0 request:: '/config/users,F response:: v{},s{19,19,1755111998616,1755111998616,0,0,0,0,0,0,19} 19:06:40.576 [controller-event-thread] DEBUG kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 19:06:40.576 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Ready to serve as the new controller with epoch 1 19:06:40.577 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.577 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:06:40.577 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x3b zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/reassign_partitions 19:06:40.577 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.577 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 19:06:40.577 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3c zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/ips 19:06:40.578 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.578 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.578 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.578 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/reassign_partitions serverPath:/admin/reassign_partitions finished:false header:: 59,3 replyHeader:: 59,28,-101 request:: '/admin/reassign_partitions,T response:: 19:06:40.578 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/ips serverPath:/config/ips finished:false header:: 60,12 replyHeader:: 60,28,0 request:: '/config/ips,F response:: v{},s{21,21,1755111998626,1755111998626,0,0,0,0,0,0,21} 19:06:40.579 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 19:06:40.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x3d zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/brokers 19:06:40.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.579 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.580 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/brokers serverPath:/config/brokers finished:false header:: 61,12 replyHeader:: 61,28,0 request:: '/config/brokers,F response:: v{},s{20,20,1755111998621,1755111998621,0,0,0,0,0,0,20} 19:06:40.581 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
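During "Initializing controller context" above, the controller lists /brokers/ids and reads each child's JSON registration (the hex payload returned for /brokers/ids/1 decodes to the broker's listener_security_protocol_map and SASL_PLAINTEXT://localhost:38537 endpoint). Kafka performs these reads through its own wrapper (kafka.zookeeper.ZooKeeperClient in this log); the sketch below shows the same getChildren/getData pattern with the plain ZooKeeper client API, assuming a reachable, readable ensemble, with an illustrative connect string:

    import java.nio.charset.StandardCharsets;
    import java.util.List;
    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class BrokerRegistryDump {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // The embedded test server above listens on 127.0.0.1:42969; adjust as needed.
            ZooKeeper zk = new ZooKeeper("127.0.0.1:42969", 30_000, event -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await();

            // Same read pattern as the controller-context initialization above:
            // list /brokers/ids, then fetch each child's JSON registration payload.
            List<String> ids = zk.getChildren("/brokers/ids", false);
            for (String id : ids) {
                byte[] data = zk.getData("/brokers/ids/" + id, false, null);
                System.out.println("broker " + id + " -> " + new String(data, StandardCharsets.UTF_8));
            }
            zk.close();
        }
    }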
19:06:40.582 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.582 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x3e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.582 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 62,4 replyHeader:: 62,28,-101 request:: '/admin/preferred_replica_election,T response:: 19:06:40.583 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting processors for listener ListenerName(SASL_PLAINTEXT) 19:06:40.584 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions undergoing preferred replica election: 19:06:40.584 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Partitions that completed preferred replica election: 19:06:40.585 [main] DEBUG kafka.network.DataPlaneAcceptor - Starting acceptor thread for listener ListenerName(SASL_PLAINTEXT) 19:06:40.585 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: 19:06:40.585 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Resuming preferred replica election for partitions: 19:06:40.586 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.587 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.587 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered 19:06:40.591 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:40.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:06:40.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:06:40.592 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112000589 19:06:40.592 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:40.594 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] started 19:06:40.598 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002c50e0000 19:06:40.599 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.599 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.599 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.600 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56904395646 19:06:40.601 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.601 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 8 19:06:40.601 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.601 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.601 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56904395646 19:06:40.602 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 19:06:40.602 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:40.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: 14 : error: -101 19:06:40.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1d, Digest in log and actual tree: 56904395646 19:06:40.603 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x3f zxid:0x1d txntype:14 reqpath:n/a 19:06:40.604 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54958 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:40.606 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 63,14 replyHeader:: 63,29,0 request:: org.apache.zookeeper.MultiOperationRecord@228011e8 response:: org.apache.zookeeper.MultiResponse@441 19:06:40.606 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 19:06:40.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:54958 19:06:40.618 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Starting the controller scheduler 19:06:40.618 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Initializing task scheduler. 19:06:40.619 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 5000 ms and period -1000 ms. 
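The DataPlaneAcceptor/Processor lines above show the usual split between one accepting thread and a small pool of network processors that each new connection is handed to ("assigned it to processor 0"). Below is a loose sketch of that accept-and-hand-off shape, not Kafka's SocketServer; the port and processor count are illustrative:

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class AcceptorSketch {
        public static void main(String[] args) throws Exception {
            ServerSocketChannel acceptor = ServerSocketChannel.open();
            acceptor.bind(new InetSocketAddress(38537)); // illustrative port, mirroring the listener above
            ExecutorService[] processors = {
                Executors.newSingleThreadExecutor(),
                Executors.newSingleThreadExecutor()
            };
            int next = 0;
            while (true) {
                SocketChannel connection = acceptor.accept();   // blocking accept on a single thread
                int target = next++ % processors.length;        // round-robin hand-off
                System.out.println("Accepted " + connection.getRemoteAddress()
                    + ", assigned to processor " + target);
                processors[target].submit(() -> {
                    // A real processor would register the channel with its own Selector
                    // and service reads/writes; omitted in this sketch.
                });
            }
        }
    }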
19:06:40.621 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:38537] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 19:06:40.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x40 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.629 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 64,3 replyHeader:: 64,29,0 request:: '/controller,T response:: s{27,27,1755112000368,1755112000368,0,0,0,72057605933891584,54,0,27} 19:06:40.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.630 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x41 zxid:0xfffffffffffffffe txntype:unknown reqpath:/controller 19:06:40.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:40.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:40.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:40.632 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/controller serverPath:/controller finished:false header:: 65,4 replyHeader:: 65,29,0 request:: '/controller,T response:: #7b2276657273696f6e223a312c2262726f6b65726964223a312c2274696d657374616d70223a2231373535313132303030333535227d,s{27,27,1755112000368,1755112000368,0,0,0,72057605933891584,54,0,27} 19:06:40.635 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.635 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.636 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.636 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.636 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x42 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/preferred_replica_election 19:06:40.637 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/preferred_replica_election serverPath:/admin/preferred_replica_election finished:false header:: 66,3 replyHeader:: 66,29,-101 request:: '/admin/preferred_replica_election,T response:: 19:06:40.648 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:38537 (id: -1 rack: null)], partitions = [], controller = null). 19:06:40.650 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
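The AdminClientConfig dump above (bootstrap.servers=SASL_PLAINTEXT://localhost:38537, client.id=test-consumer-id, sasl.mechanism=PLAIN, request.timeout.ms=15000, sasl.jaas.config hidden) is the client-side counterpart of the SASL_PLAINTEXT listener the broker just opened. A minimal sketch of building an equivalent admin client and asking it for the cluster's nodes, roughly the listNodes call queued just below; the JAAS username and password are placeholders for the hidden value:

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;
    import org.apache.kafka.common.config.SaslConfigs;

    public class AdminClientSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Values mirror the AdminClientConfig dump above; credentials are placeholders.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 15000);
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"admin-secret\";");

            try (Admin admin = Admin.create(props)) {
                // Describe the cluster and print its nodes.
                for (Node node : admin.describeCluster().nodes().get()) {
                    System.out.println("node " + node.id() + " at " + node.host() + ":" + node.port());
                }
            }
        }
    }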
19:06:40.655 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:06:40.655 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:06:40.655 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112000655 19:06:40.655 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 19:06:40.656 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 19:06:40.658 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=listNodes, deadlineMs=1755112060657, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 19:06:40.660 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:40.660 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:40.661 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38537 (id: -1 rack: null) using address localhost/127.0.0.1 19:06:40.661 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Completed connection to node 1. Ready. 19:06:40.662 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54960 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:40.662 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:54960 19:06:40.663 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:40.663 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:40.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:40.665 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:40.667 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:06:40.668 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient 
clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:40.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:40.668 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 19:06:40.668 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:40.687 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.687 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: No controller defined in metadata cache, retrying after backoff 19:06:40.704 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:40.705 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:40.707 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:40.707 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:40.708 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:40.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:40.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:40.708 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:40.708 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INITIAL 19:06:40.708 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:40.709 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:40.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:40.712 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:40.712 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:06:40.712 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to INTERMEDIATE 19:06:40.712 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:06:40.713 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:40.713 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:40.714 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:40.714 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:40.714 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Set SASL client state to COMPLETE 19:06:40.714 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Controller id=1, targetBrokerId=1] Finished authentication with no session expiration and no session re-authentication 19:06:40.714 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:40.714 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.common.network.Selector - [Controller id=1, targetBrokerId=1] Successfully authenticated with localhost/127.0.0.1 19:06:40.714 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:40.714 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:06:40.714 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:06:40.714 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:06:40.714 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Controller 1 connected to localhost:38537 (id: 1 rack: null) for sending state change requests 19:06:40.714 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 19:06:40.715 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:40.716 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38537, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:06:40.735 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.735 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: No controller defined in metadata cache, retrying after backoff 19:06:40.753 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), 
ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:40.757 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=0): UpdateMetadataResponseData(errorCode=0) 19:06:40.764 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], 
StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:06:40.765 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:38537 (id: -1 rack: null). 
correlationId=1, timeoutMs=14892 19:06:40.766 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14892 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:40.788 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Controller isn't cached, looking for local metadata changes 19:06:40.789 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Recorded new controller, from now on will use broker localhost:38537 (id: 1 rack: null) 19:06:40.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":0,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[],"liveBrokers":[{"id":1,"endpoints":[{"port":38537,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54958-0","totalTimeMs":38.592,"requestQueueTimeMs":23.411,"localTimeMs":14.483,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.232,"sendTimeMs":0.464,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:40.802 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:54960-0","totalTimeMs":33.166,"requestQueueTimeMs":16.63,"localTimeMs":12.788,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.275,"sendTimeMs":3.472,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:40.824 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, 
apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 19:06:40.826 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[]},"connection":"127.0.0.1:38537-127.0.0.1:54960-0","totalTimeMs":20.423,"requestQueueTimeMs":2.821,"localTimeMs":16.659,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.34,"sendTimeMs":0.602,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:40.836 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Controller isn't cached, looking for local metadata changes 19:06:40.836 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Recorded new controller, from now on will use broker localhost:38537 (id: 1 rack: null) 19:06:40.837 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = rNB1PmTvTz-7PD_3X0wh3g, nodes = [localhost:38537 (id: 1 rack: null)], partitions = [], controller = localhost:38537 (id: 1 rack: null)) 19:06:40.838 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:40.838 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:06:40.839 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:40.839 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:40.839 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54962 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:40.839 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from 
/127.0.0.1:54962 19:06:40.842 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:06:40.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:40.843 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:40.843 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:40.844 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 19:06:40.844 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:40.846 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:40.846 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:40.846 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:40.847 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:40.847 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:06:40.847 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:40.848 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:06:40.848 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:40.849 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:40.849 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:40.849 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:06:40.849 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:06:40.849 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:06:40.849 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 19:06:40.849 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:40.855 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, 
minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:40.857 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], 
DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:06:40.858 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0
.0.1:54962-1","totalTimeMs":2.877,"requestQueueTimeMs":0.565,"localTimeMs":1.708,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.123,"sendTimeMs":0.48,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:40.858 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending DescribeClusterRequestData(includeClusterAuthorizedOperations=false) to localhost:38537 (id: 1 rack: null). correlationId=3, timeoutMs=14968 19:06:40.858 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending DESCRIBE_CLUSTER request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3) and timeout 14968 to node 1: DescribeClusterRequestData(includeClusterAuthorizedOperations=false) 19:06:40.870 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received DESCRIBE_CLUSTER response from node 1 for request with header RequestHeader(apiKey=DESCRIBE_CLUSTER, apiVersion=0, clientId=test-consumer-id, correlationId=3): DescribeClusterResponseData(throttleTimeMs=0, errorCode=0, errorMessage=null, clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, brokers=[DescribeClusterBroker(brokerId=1, host='localhost', port=38537, rack=null)], clusterAuthorizedOperations=-2147483648) 19:06:40.871 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":60,"requestApiVersion":0,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"DESCRIBE_CLUSTER"},"request":{"includeClusterAuthorizedOperations":false},"response":{"throttleTimeMs":0,"errorCode":0,"errorMessage":null,"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"brokers":[{"brokerId":1,"host":"localhost","port":38537,"rack":null}],"clusterAuthorizedOperations":-2147483648},"connection":"127.0.0.1:38537-127.0.0.1:54962-1","totalTimeMs":10.544,"requestQueueTimeMs":1.418,"localTimeMs":8.483,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.163,"sendTimeMs":0.478,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:40.872 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 19:06:40.872 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 
19:06:40.872 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 19:06:40.875 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38537-127.0.0.1:54960-0) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:06:40.875 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38537-127.0.0.1:54962-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:06:40.877 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:06:40.878 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:06:40.878 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:06:40.878 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 19:06:40.878 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 19:06:40.878 [main] INFO com.salesforce.kafka.test.KafkaTestCluster - Found 1 brokers on-line, cluster is ready. 
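The METADATA, DESCRIBE_CLUSTER and client-close exchange just logged is the readiness probe summarised by "Found 1 brokers on-line, cluster is ready." A minimal sketch of such a probe against the plain Kafka AdminClient API is given below; it is illustrative only, is not the actual com.salesforce.kafka.test.KafkaTestCluster implementation, and the method name and timeout values are assumptions.

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

final class ClusterReadyProbe {
    // Mirrors the METADATA / DESCRIBE_CLUSTER round trip and AdminClient close seen above.
    // saslClientProps must already carry bootstrap.servers plus the security.protocol / sasl.* settings
    // (see the AdminClientConfig dump that follows in the log).
    static boolean isReady(Properties saslClientProps, int expectedBrokers) throws Exception {
        Properties props = new Properties();
        props.putAll(saslClientProps);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "15000");
        try (AdminClient admin = AdminClient.create(props)) {
            int brokerCount = admin.describeCluster().nodes().get(15, TimeUnit.SECONDS).size();
            return brokerCount >= expectedBrokers;
        } // close() produces the "Initiating close operation" and EOFException disconnect lines above
    }
}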
19:06:40.878 [main] DEBUG org.onap.sdc.utils.SdcKafkaTest - Cluster started at: SASL_PLAINTEXT://localhost:38537 19:06:40.879 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [SASL_PLAINTEXT://localhost:38537] client.dns.lookup = use_all_dns_ips client.id = test-consumer-id connections.max.idle.ms = 300000 default.api.timeout.ms = 60000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 15000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 19:06:40.879 [main] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:38537 (id: -1 rack: null)], partitions = [], controller = null). 19:06:40.880 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
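The AdminClientConfig dump above (security.protocol = SASL_PLAINTEXT, sasl.mechanism = PLAIN, sasl.jaas.config = [hidden]) corresponds to client properties along the following lines. The credentials are placeholders, since the real JAAS value is redacted as [hidden] in the log, and the username is only inferred from the User:kafkaclient principal shown in the request logger entries.

import java.util.Properties;

// Placeholder credentials; only the non-secret settings are taken from the config dump above.
Properties adminProps = new Properties();
adminProps.put("bootstrap.servers", "SASL_PLAINTEXT://localhost:38537");
adminProps.put("client.id", "test-consumer-id");
adminProps.put("security.protocol", "SASL_PLAINTEXT");
adminProps.put("sasl.mechanism", "PLAIN");
adminProps.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"kafkaclient\" password=\"<redacted>\";");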
19:06:40.884 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:06:40.884 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:06:40.884 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112000884 19:06:40.885 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client initialized 19:06:40.885 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Thread starting 19:06:40.885 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:40.885 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38537 (id: -1 rack: null) using address localhost/127.0.0.1 19:06:40.886 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:40.886 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:40.886 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54964 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:40.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:54964 19:06:40.889 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:06:40.889 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:40.889 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node -1. Fetching API versions. 
19:06:40.889 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:40.890 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:40.891 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:40.891 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:40.891 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:40.891 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:40.891 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:40.892 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Queueing Call(callName=createTopics, deadlineMs=1755112060891, tries=0, nextAllowedTryMs=0) with a timeout 15000 ms from now. 
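The queued Call(callName=createTopics, ...) above is the test's first topic-creation request. A minimal sketch of what such a call looks like against the AdminClient API follows; the topic name, partition count and replication factor are assumptions, as they are not visible in this part of the log.

import java.util.Collections;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

// Inside a test method declaring "throws Exception"; adminProps is the SASL_PLAINTEXT
// configuration sketched earlier. Name and sizing below are assumed, not taken from the log.
try (AdminClient admin = AdminClient.create(adminProps)) {
    NewTopic topic = new NewTopic("SDC-DISTR-NOTIF-TOPIC", 1, (short) 1);
    admin.createTopics(Collections.singletonList(topic)).all().get(); // waits for the broker ack
}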
19:06:40.892 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:06:40.893 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:06:40.893 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:40.893 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:40.893 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:40.893 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:40.893 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:06:40.894 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:06:40.894 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:06:40.894 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node -1. 
19:06:40.894 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0) and timeout 3600000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:40.897 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, 
maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:40.897 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:06:40.898 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to localhost:38537 (id: -1 rack: null). 
correlationId=1, timeoutMs=14987 19:06:40.898 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1) and timeout 14987 to node -1: MetadataRequestData(topics=[], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:40.898 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":
"127.0.0.1:38537-127.0.0.1:54964-1","totalTimeMs":1.827,"requestQueueTimeMs":0.313,"localTimeMs":1.074,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.35,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:40.901 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=test-consumer-id, correlationId=1): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[], clusterAuthorizedOperations=-2147483648) 19:06:40.901 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=test-consumer-id] Updating cluster metadata to Cluster(id = rNB1PmTvTz-7PD_3X0wh3g, nodes = [localhost:38537 (id: 1 rack: null)], partitions = [], controller = localhost:38537 (id: 1 rack: null)) 19:06:40.902 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:40.902 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:06:40.902 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:40.902 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:40.902 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"test-consumer-id","requestApiKeyName":"METADATA"},"request":{"topics":[],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[]},"connection":"127.0.0.1:38537-127.0.0.1:54964-1","totalTimeMs":1.941,"requestQueueTimeMs":0.239,"localTimeMs":1.241,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.148,"sendTimeMs":0.311,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:40.939 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54966 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:40.940 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new 
connection from /127.0.0.1:54966 19:06:40.941 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:06:40.942 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:40.942 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Completed connection to node 1. Fetching API versions. 19:06:40.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:40.942 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:40.943 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:40.943 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:40.943 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:40.944 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:40.944 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:40.944 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INITIAL 19:06:40.944 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to INTERMEDIATE 19:06:40.944 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:40.945 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:40.945 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 
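[editor's note] The exchange above shows the AdminClient and broker negotiating API versions, performing a SASL handshake with the PLAIN mechanism, and completing authentication over the SASL_PLAINTEXT listener. As a minimal, hypothetical Java sketch of the client-side configuration that produces this handshake (the port, client id and principal are taken from this run; the password is a placeholder, and this is not the test's actual source code):

import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslPlainAdminClientSketch {
    public static Admin create() {
        Properties props = new Properties();
        // Broker address and client id as seen in this log; both are run-specific.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38537");
        props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
        // SASL_PLAINTEXT listener with the PLAIN mechanism, matching the handshake logged above.
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // The log only shows the authenticated principal User:kafkaclient; the password here is a placeholder.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"kafkaclient\" password=\"changeme\";");
        return Admin.create(props);
    }
}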
19:06:40.945 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:40.945 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Set SASL client state to COMPLETE 19:06:40.945 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [AdminClient clientId=test-consumer-id] Finished authentication with no session expiration and no session re-authentication 19:06:40.945 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=test-consumer-id] Successfully authenticated with localhost/127.0.0.1 19:06:40.945 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Initiating API versions fetch from node 1. 19:06:40.946 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2) and timeout 3600000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:40.950 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":2,"clientId":"test-consumer-id","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},
{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:54966-2","totalTimeMs":2.286,"requestQueueTimeMs":0.662,"localTimeMs":1.301,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.098,"sendTimeMs":0.223,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:40.950 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=test-consumer-id, correlationId=2): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, 
maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:40.951 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 
[usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:06:40.951 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Sending CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14950, validateOnly=false) to localhost:38537 (id: 1 rack: null). correlationId=3, timeoutMs=14950 19:06:40.952 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Sending CREATE_TOPICS request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3) and timeout 14950 to node 1: CreateTopicsRequestData(topics=[CreatableTopic(name='my-test-topic', numPartitions=1, replicationFactor=1, assignments=[], configs=[])], timeoutMs=14950, validateOnly=false) 19:06:40.980 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 19:06:40.980 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x43 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/my-test-topic 19:06:40.981 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/my-test-topic serverPath:/admin/delete_topics/my-test-topic finished:false header:: 67,3 replyHeader:: 67,29,-101 request:: '/admin/delete_topics/my-test-topic,F response:: 19:06:40.982 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:40.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:40.983 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x44 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:40.983 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 68,3 replyHeader:: 68,29,-101 request:: '/brokers/topics/my-test-topic,F response:: 19:06:41.019 [data-plane-kafka-request-handler-0] INFO kafka.zk.AdminZkClient - Creating topic my-test-topic with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) 19:06:41.023 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.038 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000002c50e0000 type:setData cxid:0x45 zxid:0x1e txntype:-1 reqpath:n/a 19:06:41.039 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:41.040 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 69,5 replyHeader:: 69,30,-101 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,-1 response:: 19:06:41.045 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.045 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.046 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.046 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.046 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.046 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 56904395646 19:06:41.046 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 58325086567 19:06:41.053 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 19:06:41.054 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:41.054 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 1f, Digest in log and actual tree: 62610896329 19:06:41.054 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x46 zxid:0x1f txntype:1 reqpath:n/a 19:06:41.054 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 70,1 replyHeader:: 70,31,0 request:: '/config/topics/my-test-topic,#7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/my-test-topic 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.070 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 62610896329 19:06:41.071 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 59840812992 19:06:41.072 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 19:06:41.072 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 20, Digest in log and actual tree: 63345445946 19:06:41.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x47 zxid:0x20 txntype:1 reqpath:n/a 19:06:41.073 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:06:41.073 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002c50e0000 19:06:41.073 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 19:06:41.076 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 71,1 replyHeader:: 71,32,0 request:: '/brokers/topics/my-test-topic,#7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22593746594d5a41535250716159726641533438354f77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/my-test-topic 19:06:41.076 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.076 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:41.076 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x48 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:41.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.077 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.078 [data-plane-kafka-request-handler-0] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/my-test-topic with Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 19:06:41.077 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 72,12 replyHeader:: 72,32,0 request:: '/brokers/topics,T response:: v{'my-test-topic},s{6,6,1755111998534,1755111998534,0,1,0,0,0,1,32} 19:06:41.080 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.080 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:41.081 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x49 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x4a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/my-test-topic 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.081 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.081 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 73,4 replyHeader:: 73,32,0 request:: '/brokers/topics/my-test-topic,F response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22593746594d5a41535250716159726641533438354f77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1755112001069,1755112001069,0,0,0,0,116,0,32} 19:06:41.082 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/my-test-topic serverPath:/brokers/topics/my-test-topic finished:false header:: 74,4 replyHeader:: 74,32,0 request:: '/brokers/topics/my-test-topic,T response:: #7b22706172746974696f6e73223a7b2230223a5b315d7d2c22746f7069635f6964223a22593746594d5a41535250716159726641533438354f77222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{32,32,1755112001069,1755112001069,0,0,0,0,116,0,32} 19:06:41.090 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(my-test-topic)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(my-test-topic,Some(Y7FYMZASRPqaYrfAS485Ow),Map(my-test-topic-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 19:06:41.091 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for my-test-topic-0 19:06:41.094 [controller-event-thread] INFO state.change.logger - [Controller id=1 
epoch=1] Changed partition my-test-topic-0 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.094 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.099 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.107 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 63345445946 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.108 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 63345445946 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 62214967968 19:06:41.109 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 65679182783 19:06:41.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 19:06:41.121 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 21, Digest in log and actual tree: 65679182783 19:06:41.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x4b zxid:0x21 txntype:14 reqpath:n/a 19:06:41.122 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 75,14 replyHeader:: 75,33,0 request:: org.apache.zookeeper.MultiOperationRecord@81bd0a85 response:: org.apache.zookeeper.MultiResponse@7b890ac6 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.126 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 65679182783 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.126 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 65679182783 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 63448904346 19:06:41.127 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 65856214821 19:06:41.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 19:06:41.130 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 22, Digest in log and actual tree: 65856214821 19:06:41.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x4c zxid:0x22 txntype:14 reqpath:n/a 19:06:41.131 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 76,14 replyHeader:: 76,34,0 request:: org.apache.zookeeper.MultiOperationRecord@c37a65e6 response:: org.apache.zookeeper.MultiResponse@bd466627 19:06:41.136 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.136 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.136 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.136 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 65856214821 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.137 [ProcessThread(sid:0 cport:42969):] 
DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 65856214821 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 66973850795 19:06:41.137 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70542851690 19:06:41.138 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 19:06:41.139 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 23, Digest in log and actual tree: 70542851690 19:06:41.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x4d zxid:0x23 txntype:14 reqpath:n/a 19:06:41.139 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 77,14 replyHeader:: 77,35,0 request:: org.apache.zookeeper.MultiOperationRecord@b3e0859f response:: org.apache.zookeeper.MultiResponse@ce2303a9 19:06:41.147 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition my-test-topic-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.150 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions 19:06:41.152 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions 19:06:41.154 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.155 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, partitionStates=[LeaderAndIsrPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=38537)]) 19:06:41.166 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 
1 partitions 19:06:41.201 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(my-test-topic-0) 19:06:41.201 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions 19:06:41.216 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 19:06:41.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x4e zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/my-test-topic 19:06:41.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.216 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.217 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/my-test-topic serverPath:/config/topics/my-test-topic finished:false header:: 78,4 replyHeader:: 78,35,0 request:: '/config/topics/my-test-topic,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b7d7d,s{31,31,1755112001045,1755112001045,0,0,0,0,25,0,31} 19:06:41.303 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.307 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.309 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.310 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.318 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=my-test-topic-0, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.338 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.344 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
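[editor's note] The CREATE_TOPICS request above (correlationId=3) asks for 'my-test-topic' with one partition and replication factor 1; the broker then registers the topic in ZooKeeper, becomes leader via LeaderAndIsr, and creates the partition log on disk. A minimal sketch of how such a request is typically issued from the Java AdminClient (assumed usage, not taken from the test sources):

import java.util.Collections;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void createTopic(Admin admin) throws InterruptedException, ExecutionException {
        // One partition, replication factor 1 -- the same shape as CreateTopicsRequestData in the log.
        NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
        // all() returns a KafkaFuture<Void>; get() blocks until the broker answers CREATE_TOPICS.
        admin.createTopics(Collections.singleton(topic)).all().get();
    }
}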
19:06:41.349 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition my-test-topic-0 in /tmp/kafka-unit12180474530667575823/my-test-topic-0 with properties {} 19:06:41.351 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] No checkpointed highwatermark is found for partition my-test-topic-0 19:06:41.352 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] Log loaded for partition my-test-topic-0 with initial high watermark 0 19:06:41.355 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader my-test-topic-0 with topic id Some(Y7FYMZASRPqaYrfAS485Ow) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.359 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache my-test-topic-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.371 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task highwatermark-checkpoint with initial delay 0 ms and period 5000 ms. 19:06:41.377 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 213ms correlationId 1 from controller 1 for 1 partitions 19:06:41.386 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=1): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Y7FYMZASRPqaYrfAS485Ow, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) 19:06:41.389 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":1,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":38537}]},"response":{"errorCode":0,"topics":[{"topicId":"Y7FYMZASRPqaYrfAS485Ow","partitionErrors":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54958-0","totalTimeMs":228.922,"requestQueueTimeMs":5.529,"localTimeMs":222.567,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.28,"sendTimeMs":0.545,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:41.391 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, 
controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, partitionStates=[UpdateMetadataPartitionState(topicName='my-test-topic', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38537, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:06:41.400 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 19:06:41.409 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicKey(my-test-topic) unblocked 1 topic operations 19:06:41.410 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=test-consumer-id] Received CREATE_TOPICS response from node 1 for request with header RequestHeader(apiKey=CREATE_TOPICS, apiVersion=7, clientId=test-consumer-id, correlationId=3): CreateTopicsResponseData(throttleTimeMs=0, topics=[CreatableTopicResult(name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, errorCode=0, errorMessage=null, topicConfigErrorCode=0, numPartitions=1, replicationFactor=1, configs=[CreatableTopicConfigs(name='compression.type', value='producer', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='leader.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.downconversion.enable', value='true', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.insync.replicas', value='1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.jitter.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='cleanup.policy', value='delete', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='follower.replication.throttled.replicas', value='', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.bytes', value='1073741824', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='flush.messages', value='1', readOnly=false, configSource=4, isSensitive=false), CreatableTopicConfigs(name='message.format.version', value='3.0-IV1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.compaction.lag.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='file.delete.delay.ms', value='60000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='max.message.bytes', value='1048588', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.compaction.lag.ms', value='0', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.type', value='CreateTime', readOnly=false, configSource=5, isSensitive=false), 
CreatableTopicConfigs(name='preallocate', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='min.cleanable.dirty.ratio', value='0.5', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='index.interval.bytes', value='4096', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='unclean.leader.election.enable', value='false', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='retention.bytes', value='-1', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='delete.retention.ms', value='86400000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.ms', value='604800000', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='message.timestamp.difference.max.ms', value='9223372036854775807', readOnly=false, configSource=5, isSensitive=false), CreatableTopicConfigs(name='segment.index.bytes', value='10485760', readOnly=false, configSource=5, isSensitive=false)])]) 19:06:41.412 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":19,"requestApiVersion":7,"correlationId":3,"clientId":"test-consumer-id","requestApiKeyName":"CREATE_TOPICS"},"request":{"topics":[{"name":"my-test-topic","numPartitions":1,"replicationFactor":1,"assignments":[],"configs":[]}],"timeoutMs":14950,"validateOnly":false},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","errorCode":0,"errorMessage":null,"numPartitions":1,"replicationFactor":1,"configs":[{"name":"compression.type","value":"producer","readOnly":false,"configSource":5,"isSensitive":false},{"name":"leader.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.downconversion.enable","value":"true","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.insync.replicas","value":"1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.jitter.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"cleanup.policy","value":"delete","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"follower.replication.throttled.replicas","value":"","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.bytes","value":"1073741824","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"flush.messages","value":"1","readOnly":false,"configSource":4,"isSensitive":false},{"name":"message.format.version","value":"3.0-IV1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.compaction.lag.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"file.delete.delay.ms","value":"60000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"max.message.bytes","value":"1048588","readOnly":false,"configSource":5,"isSensitive":false},{"name":"min.compaction.lag.ms","value":"0","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.type","value":"CreateTime","readOnly":false,"configSource":5,"isSensitive":false},{"name":"preallocate","value":"false","readOnly":false,"config
Source":5,"isSensitive":false},{"name":"min.cleanable.dirty.ratio","value":"0.5","readOnly":false,"configSource":5,"isSensitive":false},{"name":"index.interval.bytes","value":"4096","readOnly":false,"configSource":5,"isSensitive":false},{"name":"unclean.leader.election.enable","value":"false","readOnly":false,"configSource":5,"isSensitive":false},{"name":"retention.bytes","value":"-1","readOnly":false,"configSource":5,"isSensitive":false},{"name":"delete.retention.ms","value":"86400000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.ms","value":"604800000","readOnly":false,"configSource":5,"isSensitive":false},{"name":"message.timestamp.difference.max.ms","value":"9223372036854775807","readOnly":false,"configSource":5,"isSensitive":false},{"name":"segment.index.bytes","value":"10485760","readOnly":false,"configSource":5,"isSensitive":false}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54966-2","totalTimeMs":456.566,"requestQueueTimeMs":2.841,"localTimeMs":143.51,"remoteTimeMs":309.505,"throttleTimeMs":0,"responseQueueTimeMs":0.214,"sendTimeMs":0.494,"securityProtocol":"SASL_PLAINTEXT","principal":"User:kafkaclient","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.413 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Initiating close operation. 19:06:41.413 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Waiting for the I/O thread to exit. Hard shutdown in 31536000000 ms. 19:06:41.414 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.admin.client for test-consumer-id unregistered 19:06:41.414 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38537-127.0.0.1:54966-2) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:06:41.415 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with /127.0.0.1 (channelId=127.0.0.1:38537-127.0.0.1:54964-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at 
kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:06:41.416 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:06:41.416 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:06:41.416 [kafka-admin-client-thread | test-consumer-id] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:06:41.416 [kafka-admin-client-thread | test-consumer-id] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Exiting AdminClientRunnable thread. 19:06:41.416 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=test-consumer-id] Kafka admin client closed. 19:06:41.417 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Request key my-test-topic unblocked 1 topic requests. 19:06:41.418 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=2): UpdateMetadataResponseData(errorCode=0) 19:06:41.418 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":2,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","partitionStates":[{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":38537,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54958-0","totalTimeMs":25.38,"requestQueueTimeMs":3.295,"localTimeMs":21.607,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.168,"sendTimeMs":0.309,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:41.440 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [SASL_PLAINTEXT://localhost:38537] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = mso-group group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 600000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 
metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 50000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 19:06:41.442 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initializing the Kafka consumer 19:06:41.454 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
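
Before the consumer comes up, the CREATE_TOPICS exchange near the top of this excerpt (client id test-consumer-id, topic my-test-topic, numPartitions=1, replicationFactor=1) corresponds roughly to the admin-client call sketched below. This is a reconstruction for readability, not the test's actual source: the SASL credentials are placeholders (the log only shows the principal User:kafkaclient and sasl.jaas.config = [hidden]), and the broker port 38537 is an ephemeral per-run port.

    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTestTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38537"); // ephemeral test port
            props.put(AdminClientConfig.CLIENT_ID_CONFIG, "test-consumer-id");
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            // Placeholder credentials; the real values are hidden in the log output.
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"kafkaclient\" password=\"<password>\";");

            try (Admin admin = Admin.create(props)) {
                // Matches the logged request: my-test-topic, one partition, replication factor 1, no per-topic configs.
                admin.createTopics(Set.of(new NewTopic("my-test-topic", 1, (short) 1))).all().get();
            }
        }
    }
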
19:06:41.512 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:06:41.512 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:06:41.512 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112001512 19:06:41.512 [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Kafka consumer initialized 19:06:41.513 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Subscribed to topic(s): my-test-topic 19:06:41.514 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: -1 rack: null) 19:06:41.519 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:41.519 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: -1 rack: null) using address localhost/127.0.0.1 19:06:41.519 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:41.519 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:41.520 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54968 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:41.520 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:54968 19:06:41.521 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:06:41.521 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:41.521 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed connection to node -1. Fetching API versions. 
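
The ConsumerConfig dump and the "Subscribed to topic(s): my-test-topic" line above map onto roughly the following consumer setup; only the non-default values from the dump are repeated, and the credentials are again placeholders rather than the test's real settings. The METADATA and FIND_COORDINATOR requests that follow in the log are issued on behalf of the first poll().

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoGroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the ConsumerConfig dump logged above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38537"); // ephemeral test port
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer-" + java.util.UUID.randomUUID());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "50000");
            // SASL settings as in the admin sketch above; credentials are placeholders.
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<user>\" password=\"<password>\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // The first poll() drives the metadata refresh and group-coordinator discovery seen below.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
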
19:06:41.521 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:41.521 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:41.523 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:41.523 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:41.523 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:41.523 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:41.523 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:41.524 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INITIAL 19:06:41.525 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:06:41.526 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:41.526 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:41.526 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:41.526 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:41.526 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to COMPLETE 19:06:41.526 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, 
groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:06:41.526 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:06:41.526 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating API versions fetch from node -1. 19:06:41.526 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=1) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:41.530 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion"
:0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:54968-2","totalTimeMs":2.2,"requestQueueTimeMs":0.435,"localTimeMs":1.364,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.285,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:41.531 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), 
ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:41.532 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
19:06:41.533 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: -1 rack: null) 19:06:41.533 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=2) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:41.535 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=0) and timeout 30000 to node -1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:41.548 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=2): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:41.549 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":2,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54968-2","totalTimeMs":12.985,"requestQueueTimeMs":2.255,"localTimeMs":9.79,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.378,"sendTimeMs":0.561,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.553 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to Y7FYMZASRPqaYrfAS485Ow 19:06:41.556 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.556 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x4f zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.556 [main] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Cluster ID: rNB1PmTvTz-7PD_3X0wh3g 19:06:41.557 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:41.557 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 79,3 replyHeader:: 79,35,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:41.558 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.558 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.558 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x50 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.559 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 80,3 replyHeader:: 80,35,-101 request:: '/brokers/topics/__consumer_offsets,F response:: 19:06:41.559 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:41.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x51 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:41.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.560 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.560 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 81,12 replyHeader:: 81,35,0 request:: '/brokers/topics,F response:: v{'my-test-topic},s{6,6,1755111998534,1755111998534,0,1,0,0,0,1,32} 19:06:41.567 [data-plane-kafka-request-handler-1] INFO kafka.zk.AdminZkClient - Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) 19:06:41.569 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.572 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:setData cxid:0x52 zxid:0x24 txntype:-1 reqpath:n/a 19:06:41.573 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Ignoring processTxn failure hdr: -1 : error: -101 19:06:41.573 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 82,5 replyHeader:: 82,36,-101 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,-1 response:: 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 70542851690 19:06:41.575 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 70871513714 19:06:41.576 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 19:06:41.576 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - config 19:06:41.577 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 25, Digest in log and actual tree: 74167884298 19:06:41.577 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x53 zxid:0x25 txntype:1 reqpath:n/a 19:06:41.577 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 83,1 replyHeader:: 83,37,0 request:: '/config/topics/__consumer_offsets,#7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,v{s{31,s{'world,'anyone}}},0 response:: '/config/topics/__consumer_offsets 19:06:41.586 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.586 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.586 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.586 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.587 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.587 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 74167884298 19:06:41.587 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 74318359556 19:06:41.588 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 19:06:41.589 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.589 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 26, Digest in log and actual tree: 77059782957 19:06:41.589 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:create cxid:0x54 zxid:0x26 txntype:1 reqpath:n/a 19:06:41.589 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:06:41.589 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics for session id 0x1000002c50e0000 19:06:41.589 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/topics 19:06:41.589 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 84,1 replyHeader:: 84,38,0 request:: '/brokers/topics/__consumer_offsets,#7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22656f7465676e615a53492d31694f4f7375545a704867222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,v{s{31,s{'world,'anyone}}},0 response:: '/brokers/topics/__consumer_offsets 19:06:41.591 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 19:06:41.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getChildren2 cxid:0x55 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics 
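
The AdminZkClient lines above show the broker auto-creating the internal __consumer_offsets topic (50 single-replica partitions, compacted, 100 MiB segments) the first time a group coordinator is looked up; no client code triggers this directly. Purely as a reading aid, the equivalent explicit declaration would look like the sketch below, where the hypothetical Admin instance would be configured exactly like the one in the CREATE_TOPICS sketch earlier and the per-topic configs are taken from the log.

    import java.util.Map;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class OffsetsTopicSketch {
        // Hypothetical equivalent of the topic the broker auto-created above.
        static void declareOffsetsTopic(Admin admin) throws Exception {
            NewTopic offsetsTopic = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of(
                            "compression.type", "producer",
                            "cleanup.policy", "compact",
                            "segment.bytes", "104857600"));
            admin.createTopics(Set.of(offsetsTopic)).all().get();
        }
    }
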
19:06:41.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.591 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.591 [data-plane-kafka-request-handler-1] DEBUG kafka.zk.AdminZkClient - Updated path /brokers/topics/__consumer_offsets with HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 
-> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=)) for replica assignment 19:06:41.592 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics serverPath:/brokers/topics finished:false header:: 85,12 replyHeader:: 85,38,0 request:: '/brokers/topics,T response:: v{'my-test-topic,'__consumer_offsets},s{6,6,1755111998534,1755111998534,0,2,0,0,0,2,38} 19:06:41.593 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x56 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.593 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.594 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:41.594 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: 
clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 86,4 replyHeader:: 86,38,0 request:: '/brokers/topics/__consumer_offsets,T response:: #7b22706172746974696f6e73223a7b223434223a5b315d2c223435223a5b315d2c223436223a5b315d2c223437223a5b315d2c223438223a5b315d2c223439223a5b315d2c223130223a5b315d2c223131223a5b315d2c223132223a5b315d2c223133223a5b315d2c223134223a5b315d2c223135223a5b315d2c223136223a5b315d2c223137223a5b315d2c223138223a5b315d2c223139223a5b315d2c2230223a5b315d2c2231223a5b315d2c2232223a5b315d2c2233223a5b315d2c2234223a5b315d2c2235223a5b315d2c2236223a5b315d2c2237223a5b315d2c2238223a5b315d2c2239223a5b315d2c223230223a5b315d2c223231223a5b315d2c223232223a5b315d2c223233223a5b315d2c223234223a5b315d2c223235223a5b315d2c223236223a5b315d2c223237223a5b315d2c223238223a5b315d2c223239223a5b315d2c223330223a5b315d2c223331223a5b315d2c223332223a5b315d2c223333223a5b315d2c223334223a5b315d2c223335223a5b315d2c223336223a5b315d2c223337223a5b315d2c223338223a5b315d2c223339223a5b315d2c223430223a5b315d2c223431223a5b315d2c223432223a5b315d2c223433223a5b315d7d2c22746f7069635f6964223a22656f7465676e615a53492d31694f4f7375545a704867222c22616464696e675f7265706c69636173223a7b7d2c2272656d6f76696e675f7265706c69636173223a7b7d2c2276657273696f6e223a337d,s{38,38,1755112001586,1755112001586,0,0,0,0,548,0,38} 19:06:41.599 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(eotegnaZSI-1iOOsuTZpHg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] 19:06:41.599 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node -1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=0): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:41.600 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 19:06:41.600 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112001599, latencyMs=84, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=0), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:41.600 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:41.600 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
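
The FindCoordinator response above carries errorCode 15 for group mso-group because __consumer_offsets is still being created at this point; the consumer logs CoordinatorNotAvailableException, refreshes metadata, and retries internally, so no application-level handling is involved. For reference, the numeric code can be decoded with the client library's error table (Errors is a public class in kafka-clients, though not part of the stable user-facing API), as in this small sketch:

    import org.apache.kafka.common.protocol.Errors;

    public class DecodeErrorCode {
        public static void main(String[] args) {
            Errors e = Errors.forCode((short) 15);
            System.out.println(e);             // COORDINATOR_NOT_AVAILABLE
            System.out.println(e.exception()); // CoordinatorNotAvailableException: The coordinator is not available.
        }
    }
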
19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.600 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":0,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54968-2","totalTimeMs":49.971,"requestQueueTimeMs":2.118,"localTimeMs":47.264,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.222,"sendTimeMs":0.366,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.601 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with 
assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.602 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.603 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.603 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.603 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.603 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 19:06:41.603 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.606 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 77059782957 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.611 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 77059782957 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 78579773367 19:06:41.612 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79263321429 19:06:41.618 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing 
request:: sessionid:0x1000002c50e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 19:06:41.619 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 27, Digest in log and actual tree: 79263321429 19:06:41.619 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x57 zxid:0x27 txntype:14 reqpath:n/a 19:06:41.620 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 87,14 replyHeader:: 87,39,0 request:: org.apache.zookeeper.MultiOperationRecord@47c7375 response:: org.apache.zookeeper.MultiResponse@fe4873b6 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 79263321429 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.623 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 79263321429 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79078633585 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79807753004 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79807753004 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002c50e0000 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79807753004 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 79854577117 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80591057523 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80591057523 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 80591057523 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 82854614166 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83571103464 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.625 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83571103464 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.625 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.625 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.626 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 28, Digest in log and actual tree: 79807753004 19:06:41.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x58 zxid:0x28 txntype:14 reqpath:n/a 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83571103464 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 82287707683 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83324550573 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83324550573 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83324550573 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 84807478647 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86200523963 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.627 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 88,14 replyHeader:: 88,40,0 request:: org.apache.zookeeper.MultiOperationRecord@324db770 response:: org.apache.zookeeper.MultiResponse@2c19b7b1 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86200523963 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 86200523963 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85581211498 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85596971476 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85596971476 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.627 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85596971476 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 83368705081 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85779028991 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85779028991 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 85779028991 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 88538865125 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92766941546 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92766941546 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl 
- Checking session 0x1000002c50e0000 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 92766941546 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94025871345 19:06:41.628 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 19:06:41.628 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96773277702 19:06:41.629 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 29, Digest in log and actual tree: 80591057523 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x59 zxid:0x29 txntype:14 reqpath:n/a 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96773277702 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.629 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 19:06:41.629 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.630 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.630 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 89,14 replyHeader:: 89,41,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78d response:: org.apache.zookeeper.MultiResponse@2c19b7ce 19:06:41.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2a, Digest in log and actual tree: 83571103464 19:06:41.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 
19:06:41.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5a zxid:0x2a txntype:14 reqpath:n/a 19:06:41.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96773277702 19:06:41.630 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 19:06:41.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 95883706295 19:06:41.630 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2b, Digest in log and actual tree: 83324550573 19:06:41.631 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 90,14 replyHeader:: 90,42,0 request:: org.apache.zookeeper.MultiOperationRecord@324db773 response:: org.apache.zookeeper.MultiResponse@2c19b7b4 19:06:41.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5b zxid:0x2b txntype:14 reqpath:n/a 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96843999768 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96843999768 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.631 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 91,14 replyHeader:: 91,43,0 request:: org.apache.zookeeper.MultiOperationRecord@324db792 response:: org.apache.zookeeper.MultiResponse@2c19b7d3 19:06:41.631 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 19:06:41.631 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.632 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.632 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2c, Digest in log and actual tree: 86200523963 19:06:41.633 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5c zxid:0x2c txntype:14 reqpath:n/a 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96843999768 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94692937587 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94985967344 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.633 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 92,14 replyHeader:: 92,44,0 request:: org.apache.zookeeper.MultiOperationRecord@324db794 response:: org.apache.zookeeper.MultiResponse@2c19b7d5 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94985967344 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 94985967344 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 97041368631 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98226992631 19:06:41.633 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 
0x1000002c50e0000 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98226992631 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98226992631 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 96505456077 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98698374931 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98698374931 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 98698374931 19:06:41.634 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 99019650754 19:06:41.634 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101622788451 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101622788451 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 19:06:41.635 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 101622788451 19:06:41.635 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.635 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2d, Digest in log and actual tree: 85596971476 19:06:41.636 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 103804685952 19:06:41.636 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5d zxid:0x2d txntype:14 reqpath:n/a 19:06:41.636 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107428093163 19:06:41.636 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 19:06:41.636 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 93,14 replyHeader:: 93,45,0 request:: org.apache.zookeeper.MultiOperationRecord@324db795 response:: org.apache.zookeeper.MultiResponse@2c19b7d6 19:06:41.637 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.637 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2e, Digest in log and actual tree: 85779028991 19:06:41.637 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5e zxid:0x2e txntype:14 reqpath:n/a 
19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.637 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107428093163 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.637 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.638 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 94,14 replyHeader:: 94,46,0 request:: org.apache.zookeeper.MultiOperationRecord@324db752 response:: org.apache.zookeeper.MultiResponse@2c19b793 19:06:41.638 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 2f, Digest in log and actual tree: 92766941546 19:06:41.638 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x5f zxid:0x2f txntype:14 reqpath:n/a 19:06:41.638 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.638 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.638 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107428093163 19:06:41.638 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 19:06:41.639 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.639 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 95,14 replyHeader:: 95,47,0 request:: org.apache.zookeeper.MultiOperationRecord@940352de response:: org.apache.zookeeper.MultiResponse@8dcf531f 19:06:41.639 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 30, Digest in log and actual tree: 96773277702 19:06:41.639 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 107721916891 19:06:41.639 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 
type:multi cxid:0x60 zxid:0x30 txntype:14 reqpath:n/a 19:06:41.639 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109315554059 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109315554059 19:06:41.640 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.640 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 96,14 replyHeader:: 96,48,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76f response:: org.apache.zookeeper.MultiResponse@2c19b7b0 19:06:41.640 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.640 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 31, Digest in log and actual tree: 96843999768 19:06:41.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x61 zxid:0x31 txntype:14 reqpath:n/a 19:06:41.641 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.641 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.641 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.641 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 19:06:41.641 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 97,14 replyHeader:: 97,49,0 request:: org.apache.zookeeper.MultiOperationRecord@940352da response:: org.apache.zookeeper.MultiResponse@8dcf531b 19:06:41.642 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 32, Digest in log and actual tree: 94985967344 19:06:41.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x62 zxid:0x32 txntype:14 reqpath:n/a 19:06:41.642 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109315554059 19:06:41.642 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108460051951 19:06:41.642 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108477055240 19:06:41.642 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 19:06:41.642 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.643 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.643 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 98,14 replyHeader:: 98,50,0 request:: org.apache.zookeeper.MultiOperationRecord@324db775 response:: org.apache.zookeeper.MultiResponse@2c19b7b6 19:06:41.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 33, Digest in log and actual tree: 98226992631 19:06:41.643 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x63 zxid:0x33 txntype:14 reqpath:n/a 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108477055240 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.643 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 108477055240 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 109201983287 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112682321376 19:06:41.644 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 99,14 replyHeader:: 99,51,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dd response:: 
org.apache.zookeeper.MultiResponse@8dcf531e 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112682321376 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 112682321376 19:06:41.644 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 110443408573 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113693484086 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113693484086 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 113693484086 19:06:41.645 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114154795771 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114673956760 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.645 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 19:06:41.645 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 34, Digest in log and actual tree: 98698374931 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x64 zxid:0x34 txntype:14 reqpath:n/a 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114673956760 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 35, Digest in log and actual tree: 101622788451 19:06:41.646 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x65 zxid:0x35 txntype:14 reqpath:n/a 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.646 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 100,14 replyHeader:: 100,52,0 request:: org.apache.zookeeper.MultiOperationRecord@940352df response:: org.apache.zookeeper.MultiResponse@8dcf5320 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.646 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.647 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 114673956760 19:06:41.647 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 
114013636574 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 36, Digest in log and actual tree: 107428093163 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x66 zxid:0x36 txntype:14 reqpath:n/a 19:06:41.647 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117699546280 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 19:06:41.647 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 101,14 replyHeader:: 101,53,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b2 response:: org.apache.zookeeper.MultiResponse@2c19b7f3 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 37, Digest in log and actual tree: 109315554059 19:06:41.647 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 102,14 replyHeader:: 102,54,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ad response:: org.apache.zookeeper.MultiResponse@2c19b7ee 19:06:41.647 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x67 zxid:0x37 txntype:14 reqpath:n/a 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.648 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 38, Digest in log and actual tree: 108477055240 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117699546280 19:06:41.648 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x68 zxid:0x38 txntype:14 reqpath:n/a 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.648 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.648 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 103,14 replyHeader:: 103,55,0 request:: org.apache.zookeeper.MultiOperationRecord@324db790 response:: org.apache.zookeeper.MultiResponse@2c19b7d1 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.649 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.649 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 104,14 replyHeader:: 104,56,0 request:: org.apache.zookeeper.MultiOperationRecord@324db771 response:: org.apache.zookeeper.MultiResponse@2c19b7b2 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117699546280 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 117479998169 19:06:41.649 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121407614681 19:06:41.649 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:06:41.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 19:06:41.649 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.649 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.649 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 39, Digest in log and actual tree: 112682321376 19:06:41.649 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x69 zxid:0x39 txntype:14 reqpath:n/a 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.650 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.650 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121407614681 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.650 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54970 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3a, Digest in log and actual tree: 113693484086 19:06:41.650 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:54970 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6a zxid:0x3a txntype:14 reqpath:n/a 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.650 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 105,14 replyHeader:: 105,57,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b5 response:: org.apache.zookeeper.MultiResponse@2c19b7f6 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.650 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 19:06:41.650 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 106,14 replyHeader:: 106,58,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b3 response:: org.apache.zookeeper.MultiResponse@2c19b7f4 19:06:41.650 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.651 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3b, Digest in log and actual tree: 114673956760 19:06:41.651 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6b zxid:0x3b txntype:14 reqpath:n/a 19:06:41.651 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 121407614681 19:06:41.651 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 19:06:41.651 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123677298036 19:06:41.651 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125188788196 19:06:41.651 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 107,14 replyHeader:: 107,59,0 request:: org.apache.zookeeper.MultiOperationRecord@324db755 response:: org.apache.zookeeper.MultiResponse@2c19b796 19:06:41.651 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3c, Digest in log and actual tree: 117699546280 19:06:41.652 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6c zxid:0x3c txntype:14 reqpath:n/a 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.652 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125188788196 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.652 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 108,14 replyHeader:: 108,60,0 request:: org.apache.zookeeper.MultiOperationRecord@324db776 response:: org.apache.zookeeper.MultiResponse@2c19b7b7 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.652 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.652 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125188788196 19:06:41.652 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 123452362254 19:06:41.652 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed connection to node 1. Fetching API versions. 19:06:41.652 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124090334443 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124090334443 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.653 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 124090334443 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 125177018557 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126976101499 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126976101499 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.653 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:41.653 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.654 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:41.654 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126976101499 19:06:41.654 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126119830828 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128318928196 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.654 [main] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INITIAL 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128318928196 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.654 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.655 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 128318928196 19:06:41.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 126109519711 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129108967864 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129108967864 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 
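The broker-side states above (HANDSHAKE_OR_VERSIONS_REQUEST, HANDSHAKE_REQUEST, AUTHENTICATE, COMPLETE) show a PLAIN authentication completing on the SASL_PLAINTEXT listener this run opened on ephemeral port 38537, with the client ending up as principal User:admin. A minimal sketch of broker settings consistent with that handshake follows, written as Java properties purely for illustration; the credentials and the exact way the test harness configures its embedded broker are assumptions, not taken from this log.

import java.util.Properties;

// Hypothetical broker-side SASL_PLAINTEXT/PLAIN settings consistent with the handshake
// seen in this log; the embedded broker used by the test may be configured differently.
public class BrokerSaslPropsSketch {
    public static Properties brokerSaslProps() {
        Properties p = new Properties();
        p.put("listeners", "SASL_PLAINTEXT://localhost:0");          // 0 = ephemeral port (38537 in this run)
        p.put("security.inter.broker.protocol", "SASL_PLAINTEXT");
        p.put("sasl.enabled.mechanisms", "PLAIN");
        p.put("sasl.mechanism.inter.broker.protocol", "PLAIN");
        // Server-side JAAS entry for PLAIN; "admin"/"admin-secret" are placeholder credentials.
        p.put("listener.name.sasl_plaintext.plain.sasl.jaas.config",
              "org.apache.kafka.common.security.plain.PlainLoginModule required "
              + "username=\"admin\" password=\"admin-secret\" user_admin=\"admin-secret\";");
        return p;
    }
}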
19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.655 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to COMPLETE 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.655 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 129108967864 19:06:41.655 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 133361799935 19:06:41.655 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135227498270 19:06:41.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating API versions fetch from node 1. 
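On the client side, the same sequence (SEND_APIVERSIONS_REQUEST through COMPLETE, then the API-versions fetch initiated above) is what a Kafka 3.3.1 consumer goes through on its first poll against this listener. A minimal, self-contained sketch follows; the clientId, groupId and topic are taken from the log, while the bootstrap port is the ephemeral one from this run and the credentials are placeholders.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PairwiseConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537"); // ephemeral port from this run
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // SASL_PLAINTEXT + PLAIN, matching the client/server handshake in this log.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // The first poll() drives the API_VERSIONS, METADATA and FIND_COORDINATOR
            // exchanges that appear further down in this log.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.println(r.value()));
        }
    }
}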
19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135227498270 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.656 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3d, Digest in log and actual tree: 121407614681 19:06:41.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6d zxid:0x3d txntype:14 reqpath:n/a 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 135227498270 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 131308154596 19:06:41.656 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134325519461 19:06:41.656 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 19:06:41.657 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3e, Digest in log and actual tree: 125188788196 19:06:41.657 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6e zxid:0x3e txntype:14 reqpath:n/a 19:06:41.657 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.657 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 109,14 replyHeader:: 109,61,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78e response:: org.apache.zookeeper.MultiResponse@2c19b7cf 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134325519461 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134325519461 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 134680603924 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138502035696 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.657 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138502035696 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 138502035696 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140675431245 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142742165491 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.658 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, 
maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.658 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 110,14 replyHeader:: 110,62,0 request:: org.apache.zookeeper.MultiOperationRecord@324db793 response:: org.apache.zookeeper.MultiResponse@2c19b7d4 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142742165491 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.658 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142742165491 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 140048826102 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142734220855 19:06:41.659 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":1.464,"requestQueueTimeMs":0.322,"localTimeMs":0.831,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.078,"sendTimeMs":0.232,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:41.659 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 has finalized features epoch: 0, finalized 
features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
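After the API-versions negotiation above, the consumer sends a METADATA request for my-test-topic and a FIND_COORDINATOR request for mso-group (both visible below). The same topic metadata (one partition, leader/replicas/ISR all on broker 1) could also be inspected with an AdminClient using the same SASL settings; the sketch below is only an alternative way to view that state, not something this test does, and it reuses the placeholder credentials and ephemeral port noted earlier.

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTestTopicSketch {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537"); // ephemeral port from this run
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (Admin admin = Admin.create(props)) {
            TopicDescription d = admin.describeTopics(List.of("my-test-topic"))
                                      .allTopicNames().get().get("my-test-topic");
            // Expected to mirror the METADATA response below: partition 0, leader=1, isr=[1].
            d.partitions().forEach(tp ->
                System.out.printf("partition %d leader=%s isr=%s%n", tp.partition(), tp.leader(), tp.isr()));
        }
    }
}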
19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142734220855 19:06:41.659 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.659 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=4) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 142734220855 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 143489321235 19:06:41.659 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145551081068 19:06:41.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 19:06:41.661 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 3f, Digest in log and actual tree: 124090334443 19:06:41.661 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x6f zxid:0x3f txntype:14 reqpath:n/a 19:06:41.662 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 19:06:41.662 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 40, Digest in log and actual tree: 126976101499 19:06:41.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x70 zxid:0x40 txntype:14 reqpath:n/a 19:06:41.662 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 111,14 replyHeader:: 111,63,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ae response:: org.apache.zookeeper.MultiResponse@2c19b7ef 19:06:41.662 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 19:06:41.662 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 112,14 replyHeader:: 112,64,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d9 response:: org.apache.zookeeper.MultiResponse@8dcf531a 19:06:41.662 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=4): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:41.662 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":4,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.211,"requestQueueTimeMs":0.195,"localTimeMs":1.7,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.079,"sendTimeMs":0.236,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.663 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 41, Digest in log and actual tree: 128318928196 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x71 zxid:0x41 txntype:14 reqpath:n/a 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 19:06:41.663 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 3 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.663 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:41.663 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 113,14 replyHeader:: 113,65,0 request:: org.apache.zookeeper.MultiOperationRecord@324db757 response:: org.apache.zookeeper.MultiResponse@2c19b798 19:06:41.663 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 42, Digest in log and actual tree: 129108967864 19:06:41.663 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header 
RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=5) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:41.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x72 zxid:0x42 txntype:14 reqpath:n/a 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.664 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 114,14 replyHeader:: 114,66,0 request:: org.apache.zookeeper.MultiOperationRecord@324db754 response:: org.apache.zookeeper.MultiResponse@2c19b795 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.664 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145551081068 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.664 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 43, Digest in log and actual tree: 135227498270 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x73 zxid:0x43 txntype:14 reqpath:n/a 19:06:41.665 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.665 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 19:06:41.665 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 44, Digest in log and actual tree: 134325519461 19:06:41.665 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x74 zxid:0x44 txntype:14 reqpath:n/a 19:06:41.665 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 145551081068 19:06:41.665 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 144658117949 19:06:41.665 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147720169407 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147720169407 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.666 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 115,14 replyHeader:: 115,67,0 request:: org.apache.zookeeper.MultiOperationRecord@324db772 response:: org.apache.zookeeper.MultiResponse@2c19b7b3 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 147720169407 19:06:41.666 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 116,14 replyHeader:: 116,68,0 request:: org.apache.zookeeper.MultiOperationRecord@324db756 response:: org.apache.zookeeper.MultiResponse@2c19b797 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 149867005474 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151136600353 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.666 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 19:06:41.667 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.667 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 45, Digest in log and actual tree: 138502035696 19:06:41.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x75 zxid:0x45 txntype:14 reqpath:n/a 19:06:41.667 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151136600353 19:06:41.667 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.667 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 19:06:41.667 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.667 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 117,14 replyHeader:: 117,69,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b4 response:: org.apache.zookeeper.MultiResponse@2c19b7f5 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 46, Digest in log and actual tree: 142742165491 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x76 zxid:0x46 txntype:14 reqpath:n/a 19:06:41.668 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.668 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 19:06:41.668 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.668 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 118,14 replyHeader:: 118,70,0 request:: org.apache.zookeeper.MultiOperationRecord@324db758 response:: org.apache.zookeeper.MultiResponse@2c19b799 19:06:41.668 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 47, Digest in log and actual tree: 142734220855 19:06:41.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x77 zxid:0x47 txntype:14 reqpath:n/a 19:06:41.669 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151136600353 19:06:41.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x78 
zxid:0x48 txntype:14 reqpath:n/a 19:06:41.669 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 151044124768 19:06:41.669 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 119,14 replyHeader:: 119,71,0 request:: org.apache.zookeeper.MultiOperationRecord@324db750 response:: org.apache.zookeeper.MultiResponse@2c19b791 19:06:41.669 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.669 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 48, Digest in log and actual tree: 145551081068 19:06:41.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x78 zxid:0x48 txntype:14 reqpath:n/a 19:06:41.670 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153304924540 19:06:41.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 19:06:41.670 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.670 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 120,14 replyHeader:: 120,72,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d8 response:: org.apache.zookeeper.MultiResponse@8dcf5319 19:06:41.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 49, Digest in log and actual tree: 147720169407 19:06:41.670 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x79 zxid:0x49 txntype:14 reqpath:n/a 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.671 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 121,14 replyHeader:: 121,73,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7af response:: org.apache.zookeeper.MultiResponse@2c19b7f0 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153304924540 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153304924540 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 153730429234 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155237925841 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.671 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155237925841 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155237925841 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155558398624 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157411250291 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157411250291 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 157411250291 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 155300624024 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159007778605 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.672 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159007778605 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 159007778605 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 160576496967 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161598369268 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161598369268 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 161598369268 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 162991254635 19:06:41.673 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166099011953 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166099011953 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166099011953 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166148703554 19:06:41.674 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 168621275734 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.675 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 168621275734 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 168621275734 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 166588371825 19:06:41.675 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170037439818 19:06:41.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 19:06:41.691 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4a, Digest in log and actual tree: 151136600353 19:06:41.691 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x7a zxid:0x4a txntype:14 reqpath:n/a 19:06:41.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 19:06:41.692 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4b, Digest in log and actual tree: 153304924540 19:06:41.692 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x7b zxid:0x4b txntype:14 reqpath:n/a 19:06:41.693 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 122,14 replyHeader:: 122,74,0 request:: org.apache.zookeeper.MultiOperationRecord@940352dc response:: org.apache.zookeeper.MultiResponse@8dcf531d 19:06:41.693 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 123,14 replyHeader:: 123,75,0 request:: org.apache.zookeeper.MultiOperationRecord@324db753 response:: org.apache.zookeeper.MultiResponse@2c19b794 19:06:41.693 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4c, Digest in log and actual tree: 155237925841 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x7c zxid:0x4c txntype:14 reqpath:n/a 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4d, Digest in log and actual tree: 157411250291 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x7d zxid:0x4d txntype:14 reqpath:n/a 19:06:41.694 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 124,14 replyHeader:: 124,76,0 request:: org.apache.zookeeper.MultiOperationRecord@324db76e response:: org.apache.zookeeper.MultiResponse@2c19b7af 19:06:41.694 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 125,14 replyHeader:: 125,77,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d6 response:: org.apache.zookeeper.MultiResponse@8dcf5317 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.694 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x7e zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170037439818 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.695 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: 
sessionid:0x1000002c50e0000 type:multi cxid:0x7f zxid:0x4e txntype:14 reqpath:n/a 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.695 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.695 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.695 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 126,3 replyHeader:: 126,77,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4e, Digest in log and actual tree: 159007778605 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x7f zxid:0x4e txntype:14 reqpath:n/a 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x80 zxid:0x4f txntype:14 reqpath:n/a 19:06:41.696 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170037439818 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 4f, Digest in log and actual tree: 161598369268 19:06:41.696 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x80 zxid:0x4f txntype:14 reqpath:n/a 19:06:41.696 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 170901396621 19:06:41.697 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x81 zxid:0x50 txntype:14 reqpath:n/a 19:06:41.697 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 127,14 replyHeader:: 127,78,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b0 response:: org.apache.zookeeper.MultiResponse@2c19b7f1 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171787827842 19:06:41.697 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 128,14 replyHeader:: 128,79,0 request:: org.apache.zookeeper.MultiOperationRecord@324db796 response:: org.apache.zookeeper.MultiResponse@2c19b7d7 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.697 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.697 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 50, Digest in log and actual tree: 166099011953 19:06:41.697 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x81 zxid:0x50 txntype:14 reqpath:n/a 19:06:41.697 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171787827842 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x82 zxid:0x51 txntype:14 reqpath:n/a 19:06:41.698 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.698 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 51, Digest in log and actual tree: 168621275734 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x82 zxid:0x51 txntype:14 reqpath:n/a 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x83 zxid:0x52 txntype:14 reqpath:n/a 19:06:41.698 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 129,14 replyHeader:: 129,80,0 request:: org.apache.zookeeper.MultiOperationRecord@324db751 response:: org.apache.zookeeper.MultiResponse@2c19b792 19:06:41.698 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 130,14 replyHeader:: 130,81,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7b1 response:: org.apache.zookeeper.MultiResponse@2c19b7f2 19:06:41.698 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.699 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 52, Digest in log and actual tree: 170037439818 19:06:41.699 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x83 zxid:0x52 txntype:14 reqpath:n/a 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171787827842 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 171122797100 
19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175210688592 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.699 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 131,14 replyHeader:: 131,82,0 request:: org.apache.zookeeper.MultiOperationRecord@940352d7 response:: org.apache.zookeeper.MultiResponse@8dcf5318 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175210688592 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.699 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 175210688592 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 174592703519 19:06:41.700 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x84 zxid:0x53 txntype:14 reqpath:n/a 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176711284388 19:06:41.700 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.700 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 53, Digest in log and actual tree: 171787827842 19:06:41.700 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x84 zxid:0x53 txntype:14 reqpath:n/a 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 
19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176711284388 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.700 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 176711284388 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 178775939073 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181932029663 19:06:41.701 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 132,14 replyHeader:: 132,83,0 request:: org.apache.zookeeper.MultiOperationRecord@940352db response:: org.apache.zookeeper.MultiResponse@8dcf531c 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181932029663 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.701 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x85 zxid:0x54 txntype:14 reqpath:n/a 19:06:41.701 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 181932029663 19:06:41.701 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.701 
[SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 54, Digest in log and actual tree: 175210688592 19:06:41.702 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x85 zxid:0x54 txntype:14 reqpath:n/a 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 182208811887 19:06:41.702 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x86 zxid:0x55 txntype:14 reqpath:n/a 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184465298137 19:06:41.702 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.702 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 55, Digest in log and actual tree: 176711284388 19:06:41.702 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 133,14 replyHeader:: 133,84,0 request:: org.apache.zookeeper.MultiOperationRecord@324db774 response:: org.apache.zookeeper.MultiResponse@2c19b7b5 19:06:41.702 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x86 zxid:0x55 txntype:14 reqpath:n/a 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184465298137 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.702 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.702 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 134,14 replyHeader:: 134,85,0 request:: org.apache.zookeeper.MultiOperationRecord@324db777 response:: org.apache.zookeeper.MultiResponse@2c19b7b8 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184465298137 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 184817495549 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188568928595 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x87 zxid:0x56 txntype:14 reqpath:n/a 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.703 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 56, Digest in log and actual tree: 181932029663 19:06:41.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x87 zxid:0x56 txntype:14 reqpath:n/a 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188568928595 19:06:41.703 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x88 zxid:0x57 txntype:14 reqpath:n/a 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.703 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.704 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.704 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 57, Digest in log and actual tree: 184465298137 19:06:41.704 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x88 zxid:0x57 txntype:14 reqpath:n/a 19:06:41.704 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.704 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.704 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 135,14 replyHeader:: 135,86,0 request:: org.apache.zookeeper.MultiOperationRecord@324db791 response:: org.apache.zookeeper.MultiResponse@2c19b7d2 19:06:41.704 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.704 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188568928595 19:06:41.704 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188086160290 19:06:41.704 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 188921003252 19:06:41.704 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 136,14 replyHeader:: 136,87,0 request:: org.apache.zookeeper.MultiOperationRecord@324db74f response:: org.apache.zookeeper.MultiResponse@2c19b790 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x89 zxid:0x58 txntype:14 reqpath:n/a 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 58, Digest in log and actual tree: 188568928595 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x89 zxid:0x58 txntype:14 reqpath:n/a 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.705 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x8a zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.705 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 137,14 replyHeader:: 137,88,0 request:: org.apache.zookeeper.MultiOperationRecord@324db78f response:: org.apache.zookeeper.MultiResponse@2c19b7d0 19:06:41.705 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 138,3 replyHeader:: 138,88,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:41.706 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x8b zxid:0x59 txntype:14 reqpath:n/a 19:06:41.706 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.706 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 59, Digest in log and actual tree: 188921003252 19:06:41.707 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x8b zxid:0x59 txntype:14 reqpath:n/a 19:06:41.707 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
19:06:41.707 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]))
19:06:41.708 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 139,14 replyHeader:: 139,89,0 request:: org.apache.zookeeper.MultiOperationRecord@324db7ac response:: org.apache.zookeeper.MultiResponse@2c19b7ed
19:06:41.709 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=5): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])
19:06:41.709 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112001708, latencyMs=45, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=5), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]))
19:06:41.709 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed:
19:06:41.709 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata
org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available.
19:06:41.709 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":5,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":44.325,"requestQueueTimeMs":0.151,"localTimeMs":43.589,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.226,"sendTimeMs":0.356,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 188921003252 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.724 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 188921003252 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 191844473964 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194483569297 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194483569297 19:06:41.725 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.725 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 194483569297 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 197390279193 19:06:41.726 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198903821719 19:06:41.726 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5a, Digest in log and actual tree: 194483569297 19:06:41.726 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x8c zxid:0x5a txntype:14 reqpath:n/a 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198903821719 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.727 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 19:06:41.727 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 198903821719 19:06:41.727 [main-SendThread(127.0.0.1:42969)] 
DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 140,14 replyHeader:: 140,90,0 request:: org.apache.zookeeper.MultiOperationRecord@d54f07a9 response:: org.apache.zookeeper.MultiResponse@ef9185b3 19:06:41.728 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5b, Digest in log and actual tree: 198903821719 19:06:41.728 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x8d zxid:0x5b txntype:14 reqpath:n/a 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 201243207594 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204209657600 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204209657600 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.728 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.728 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 141,14 replyHeader:: 141,91,0 request:: org.apache.zookeeper.MultiOperationRecord@d363be06 response:: org.apache.zookeeper.MultiResponse@eda63c10 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 204209657600 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 205880025395 19:06:41.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 19:06:41.729 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.729 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5c, Digest in log and actual tree: 204209657600 19:06:41.729 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x8e zxid:0x5c txntype:14 reqpath:n/a 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207533640061 19:06:41.729 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207533640061 19:06:41.730 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 142,14 replyHeader:: 142,92,0 request:: org.apache.zookeeper.MultiOperationRecord@7401b96c response:: org.apache.zookeeper.MultiResponse@8e443776 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.730 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 207533640061 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 208855050757 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210789577201 19:06:41.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.731 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5d, Digest in log and actual tree: 207533640061 19:06:41.731 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x8f zxid:0x5d txntype:14 reqpath:n/a 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.731 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210789577201 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.731 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.732 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 143,14 replyHeader:: 143,93,0 request:: org.apache.zookeeper.MultiOperationRecord@dbe2e64b response:: org.apache.zookeeper.MultiResponse@f6256455 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 210789577201 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 214837252713 19:06:41.732 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215820470129 19:06:41.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.733 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5e, Digest in log and actual tree: 210789577201 19:06:41.733 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x90 zxid:0x5e txntype:14 reqpath:n/a 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215820470129 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 
19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215820470129 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215185967347 19:06:41.733 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 144,14 replyHeader:: 144,94,0 request:: org.apache.zookeeper.MultiOperationRecord@45af5ccd response:: org.apache.zookeeper.MultiResponse@5ff1dad7 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218873372777 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.733 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218873372777 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 218873372777 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 215088921071 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217960200002 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217960200002 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217960200002 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217720578324 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217771616944 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.734 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217771616944 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.734 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.735 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.735 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 5f, Digest in log and actual tree: 215820470129 19:06:41.735 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x91 zxid:0x5f txntype:14 reqpath:n/a 19:06:41.735 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.735 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.735 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 19:06:41.735 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: 
['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.735 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 60, Digest in log and actual tree: 218873372777 19:06:41.736 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x92 zxid:0x60 txntype:14 reqpath:n/a 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217771616944 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 217071842274 19:06:41.736 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 145,14 replyHeader:: 145,95,0 request:: org.apache.zookeeper.MultiOperationRecord@7a95980e response:: org.apache.zookeeper.MultiResponse@94d81618 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221212575206 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221212575206 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.736 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 146,14 replyHeader:: 146,96,0 request:: org.apache.zookeeper.MultiOperationRecord@a254160b response:: org.apache.zookeeper.MultiResponse@bc969415 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 221212575206 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 222154753689 19:06:41.736 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226010259165 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226010259165 19:06:41.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.737 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 61, Digest in log and actual tree: 217960200002 19:06:41.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x93 zxid:0x61 txntype:14 reqpath:n/a 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.737 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.737 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 19:06:41.738 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.738 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 62, Digest in log and actual tree: 217771616944 19:06:41.738 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 147,14 replyHeader:: 147,97,0 request:: org.apache.zookeeper.MultiOperationRecord@7c11d897 response:: org.apache.zookeeper.MultiResponse@965456a1 19:06:41.738 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x94 zxid:0x62 txntype:14 reqpath:n/a 19:06:41.738 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 226010259165 19:06:41.738 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 225332383658 19:06:41.738 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229605831377 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.739 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229605831377 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.739 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 148,14 replyHeader:: 148,98,0 request:: org.apache.zookeeper.MultiOperationRecord@a068cc68 response:: org.apache.zookeeper.MultiResponse@baab4a72 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 229605831377 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 230433765951 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234546536839 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.739 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234546536839 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.739 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.740 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.740 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 63, Digest in log and actual tree: 221212575206 19:06:41.740 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x95 zxid:0x63 txntype:14 reqpath:n/a 19:06:41.740 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.740 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.740 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 19:06:41.740 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.740 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.740 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 149,14 replyHeader:: 149,99,0 request:: org.apache.zookeeper.MultiOperationRecord@a878eb93 response:: org.apache.zookeeper.MultiResponse@c2bb699d 19:06:41.740 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 64, Digest in log and actual tree: 226010259165 19:06:41.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x96 zxid:0x64 txntype:14 reqpath:n/a 19:06:41.741 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234546536839 19:06:41.741 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234855516729 19:06:41.741 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236723689656 19:06:41.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 19:06:41.741 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.741 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.741 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 150,14 replyHeader:: 150,100,0 request:: org.apache.zookeeper.MultiOperationRecord@ddce2fee response:: org.apache.zookeeper.MultiResponse@f810adf8 19:06:41.741 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 65, Digest in log and actual tree: 229605831377 19:06:41.742 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x97 zxid:0x65 txntype:14 reqpath:n/a 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236723689656 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 236723689656 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 234504779429 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238171230924 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238171230924 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.742 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 238171230924 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 240246748319 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242538445693 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.743 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 19:06:41.743 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.743 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.742 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 151,14 replyHeader:: 151,101,0 request:: org.apache.zookeeper.MultiOperationRecord@472b9d56 response:: org.apache.zookeeper.MultiResponse@616e1b60 19:06:41.744 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.744 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 66, Digest in log and actual tree: 234546536839 19:06:41.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x98 zxid:0x66 txntype:14 reqpath:n/a 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242538445693 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.745 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 67, Digest in log and actual tree: 236723689656 19:06:41.745 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 152,14 replyHeader:: 152,102,0 request:: org.apache.zookeeper.MultiOperationRecord@b0f813d8 response:: org.apache.zookeeper.MultiResponse@cb3a91e2 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.745 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x99 zxid:0x67 txntype:14 reqpath:n/a 19:06:41.745 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 242538445693 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 243456291663 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247740660954 19:06:41.746 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: 
clientPath:null serverPath:null finished:false header:: 153,14 replyHeader:: 153,103,0 request:: org.apache.zookeeper.MultiOperationRecord@78aa4e6b response:: org.apache.zookeeper.MultiResponse@92eccc75 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247740660954 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 247740660954 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 248451114380 19:06:41.746 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 19:06:41.746 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251920344813 19:06:41.747 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 68, Digest in log and actual tree: 238171230924 19:06:41.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9a zxid:0x68 txntype:14 reqpath:n/a 19:06:41.747 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.747 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 19:06:41.747 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.747 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.747 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.747 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false 
header:: 154,14 replyHeader:: 154,104,0 request:: org.apache.zookeeper.MultiOperationRecord@702b2626 response:: org.apache.zookeeper.MultiResponse@8a6da430 19:06:41.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 69, Digest in log and actual tree: 242538445693 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251920344813 19:06:41.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9b zxid:0x69 txntype:14 reqpath:n/a 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 19:06:41.748 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.748 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.748 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 155,14 replyHeader:: 155,105,0 request:: org.apache.zookeeper.MultiOperationRecord@72166fc9 response:: org.apache.zookeeper.MultiResponse@8c58edd3 19:06:41.748 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6a, Digest in log and actual tree: 247740660954 19:06:41.749 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9c zxid:0x6a txntype:14 reqpath:n/a 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251920344813 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 251242473384 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252485858266 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.749 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn 
- Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 156,14 replyHeader:: 156,106,0 request:: org.apache.zookeeper.MultiOperationRecord@a3542ea response:: org.apache.zookeeper.MultiResponse@2477c0f4 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252485858266 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 252485858266 19:06:41.749 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 253142760205 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255103070371 19:06:41.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.750 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6b, Digest in log and actual tree: 251920344813 19:06:41.750 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9d zxid:0x6b txntype:14 reqpath:n/a 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255103070371 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - 
Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.750 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 157,14 replyHeader:: 157,107,0 request:: org.apache.zookeeper.MultiOperationRecord@175d002e response:: org.apache.zookeeper.MultiResponse@319f7e38 19:06:41.750 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255103070371 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 255393974545 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258769061283 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258769061283 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258769061283 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 258543013781 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259339001587 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.751 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 19:06:41.751 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259339001587 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.751 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.752 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.752 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6c, Digest in log and actual tree: 252485858266 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.752 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9e zxid:0x6c txntype:14 reqpath:n/a 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 259339001587 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 260988311937 19:06:41.752 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261630133311 19:06:41.752 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 19:06:41.753 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.753 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 158,14 replyHeader:: 158,108,0 request:: org.apache.zookeeper.MultiOperationRecord@ad9089ac response:: org.apache.zookeeper.MultiResponse@c7d307b6 19:06:41.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6d, Digest in log and actual tree: 255103070371 19:06:41.753 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0x9f zxid:0x6d txntype:14 reqpath:n/a 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261630133311 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.753 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.753 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.753 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 159,14 replyHeader:: 159,109,0 request:: org.apache.zookeeper.MultiOperationRecord@4106c7ce response:: org.apache.zookeeper.MultiResponse@5b4945d8 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 261630133311 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 264221042101 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265738605042 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265738605042 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 265738605042 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267336435127 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269131225732 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.754 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.754 [ProcessThread(sid:0 
cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269131225732 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269131225732 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267507830417 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267945561465 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267945561465 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267945561465 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 267057400827 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269488238975 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269488238975 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.755 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 269488238975 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270309258237 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272437152229 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272437152229 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272437152229 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from 
outstandingChanges is: 269731886624 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272883861135 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.756 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272883861135 19:06:41.756 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.757 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6e, Digest in log and actual tree: 258769061283 19:06:41.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa0 zxid:0x6e txntype:14 reqpath:n/a 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 272883861135 19:06:41.757 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 19:06:41.757 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 270178474644 19:06:41.758 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 160,14 replyHeader:: 160,110,0 request:: org.apache.zookeeper.MultiOperationRecord@12b46b2f response:: org.apache.zookeeper.MultiResponse@2cf6e939 19:06:41.758 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 6f, Digest in log and actual tree: 259339001587 19:06:41.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa1 zxid:0x6f txntype:14 reqpath:n/a 19:06:41.758 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 273685551830 19:06:41.758 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 19:06:41.759 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 70, Digest in log and actual tree: 261630133311 19:06:41.759 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 161,14 replyHeader:: 161,111,0 request:: org.apache.zookeeper.MultiOperationRecord@849f947 response:: org.apache.zookeeper.MultiResponse@228c7751 19:06:41.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa2 zxid:0x70 txntype:14 reqpath:n/a 19:06:41.759 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.759 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.759 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 19:06:41.759 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.759 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.760 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.760 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 162,14 replyHeader:: 162,112,0 request:: org.apache.zookeeper.MultiOperationRecord@10c9218c response:: org.apache.zookeeper.MultiResponse@2b0b9f96 19:06:41.760 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 71, Digest in log and actual tree: 265738605042 19:06:41.760 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa3 zxid:0x71 txntype:14 reqpath:n/a 19:06:41.760 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 273685551830 19:06:41.760 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.760 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 19:06:41.760 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.762 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.761 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 163,14 replyHeader:: 163,113,0 request:: org.apache.zookeeper.MultiOperationRecord@a5116167 response:: 
org.apache.zookeeper.MultiResponse@bf53df71 19:06:41.762 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 72, Digest in log and actual tree: 269131225732 19:06:41.770 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.770 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:41.771 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.769 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa4 zxid:0x72 txntype:14 reqpath:n/a 19:06:41.771 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.771 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=6) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:41.771 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 273685551830 19:06:41.771 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 273373688202 19:06:41.771 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 276596606162 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 276596606162 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 
19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 276596606162 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 276926469710 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280696396117 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280696396117 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.772 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 280696396117 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282319390048 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282388950286 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282388950286 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.773 
[ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282388950286 19:06:41.773 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 279725367627 19:06:41.774 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282629772186 19:06:41.772 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 164,14 replyHeader:: 164,114,0 request:: org.apache.zookeeper.MultiOperationRecord@7392b052 response:: org.apache.zookeeper.MultiResponse@8dd52e5c 19:06:41.774 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 19:06:41.775 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.775 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=6): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:41.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 73, Digest in log and actual tree: 267945561465 19:06:41.775 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:41.775 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa5 zxid:0x73 txntype:14 reqpath:n/a 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 19:06:41.776 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 4 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], 
replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 74, Digest in log and actual tree: 269488238975 19:06:41.776 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa6 zxid:0x74 txntype:14 reqpath:n/a 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.776 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=7) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.776 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":6,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.899,"requestQueueTimeMs":0.612,"localTimeMs":1.899,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.099,"sendTimeMs":0.287,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.776 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 165,14 replyHeader:: 165,115,0 request:: org.apache.zookeeper.MultiOperationRecord@aad33e50 response:: org.apache.zookeeper.MultiResponse@c515bc5a 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 
type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282629772186 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.776 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.776 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 75, Digest in log and actual tree: 272437152229 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa7 zxid:0x75 txntype:14 reqpath:n/a 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282629772186 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283587105048 19:06:41.777 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 166,14 replyHeader:: 166,116,0 request:: org.apache.zookeeper.MultiOperationRecord@c208c8d response:: org.apache.zookeeper.MultiResponse@26630a97 19:06:41.777 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283700793240 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 76, Digest in log and actual tree: 272883861135 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa8 zxid:0x76 txntype:14 reqpath:n/a 19:06:41.777 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 77, Digest in log and actual tree: 273685551830 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xa9 zxid:0x77 txntype:14 reqpath:n/a 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer 
- Permission requested: 1 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 78, Digest in log and actual tree: 276596606162 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xaa zxid:0x78 txntype:14 reqpath:n/a 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283700793240 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.778 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 167,14 replyHeader:: 167,117,0 request:: org.apache.zookeeper.MultiOperationRecord@3f1b7e2b response:: org.apache.zookeeper.MultiResponse@595dfc35 19:06:41.778 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 283700793240 19:06:41.778 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 282678448666 19:06:41.779 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 168,14 replyHeader:: 168,118,0 request:: org.apache.zookeeper.MultiOperationRecord@75ed030f response:: org.apache.zookeeper.MultiResponse@902f8119 19:06:41.779 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 169,14 replyHeader:: 169,119,0 request:: org.apache.zookeeper.MultiOperationRecord@e276c4ed response:: org.apache.zookeeper.MultiResponse@fcb942f7 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 284792493895 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.779 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 170,14 replyHeader:: 170,120,0 request:: org.apache.zookeeper.MultiOperationRecord@dfb97991 response:: org.apache.zookeeper.MultiResponse@f9fbf79b 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.779 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.779 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 79, Digest in log and actual tree: 280696396117 19:06:41.780 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xab zxid:0x79 txntype:14 reqpath:n/a 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 284792493895 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.780 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 171,14 replyHeader:: 171,121,0 request:: org.apache.zookeeper.MultiOperationRecord@38879f89 response:: org.apache.zookeeper.MultiResponse@52ca1d93 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 284792493895 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 287455808330 19:06:41.780 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290777159400 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290777159400 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290777159400 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292400554755 19:06:41.781 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293559214619 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 19:06:41.782 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7a, Digest in log and actual tree: 282388950286 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xac zxid:0x7a txntype:14 reqpath:n/a 19:06:41.782 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 19:06:41.782 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.782 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.782 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.782 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7b, Digest in log and actual tree: 282629772186 19:06:41.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xad zxid:0x7b txntype:14 reqpath:n/a 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293559214619 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.783 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 172,14 replyHeader:: 172,122,0 request:: org.apache.zookeeper.MultiOperationRecord@3eac7511 response:: org.apache.zookeeper.MultiResponse@58eef31b 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293559214619 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292494743799 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293818823919 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293818823919 19:06:41.783 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 173,14 replyHeader:: 173,123,0 request:: org.apache.zookeeper.MultiOperationRecord@d9f79ca8 response:: org.apache.zookeeper.MultiResponse@f43a1ab2 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.783 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 19:06:41.783 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.784 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7c, Digest in log and actual tree: 283700793240 19:06:41.784 [SyncThread:0] 
DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xae zxid:0x7c txntype:14 reqpath:n/a 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293818823919 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 294904204171 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295047891074 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295047891074 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.784 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 174,14 replyHeader:: 174,124,0 request:: org.apache.zookeeper.MultiOperationRecord@12456215 response:: org.apache.zookeeper.MultiResponse@2c87e01f 19:06:41.784 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 19:06:41.784 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295047891074 19:06:41.785 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292620114202 19:06:41.785 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7d, Digest in log and actual tree: 284792493895 19:06:41.785 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292694391535 19:06:41.785 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xaf zxid:0x7d txntype:14 reqpath:n/a 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - 
Checking session 0x1000002c50e0000 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292694391535 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292694391535 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290269743975 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 291938147788 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 291938147788 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.786 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 175,14 replyHeader:: 175,125,0 request:: org.apache.zookeeper.MultiOperationRecord@d73a514c response:: org.apache.zookeeper.MultiResponse@f17ccf56 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 
'ip,'127.0.0.1 ] 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 291938147788 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 289336132593 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290175158083 19:06:41.786 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290175158083 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.787 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7e, Digest in log and actual tree: 290777159400 19:06:41.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb0 zxid:0x7e txntype:14 reqpath:n/a 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.787 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 19:06:41.787 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.788 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 7f, Digest in log and actual tree: 293559214619 19:06:41.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb1 zxid:0x7f txntype:14 reqpath:n/a 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 290175158083 19:06:41.788 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 176,14 replyHeader:: 176,126,0 request:: org.apache.zookeeper.MultiOperationRecord@6b829127 response:: 
org.apache.zookeeper.MultiResponse@85c50f31 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292698547488 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 294715795764 19:06:41.788 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 177,14 replyHeader:: 177,127,0 request:: org.apache.zookeeper.MultiOperationRecord@d4dffe8f response:: org.apache.zookeeper.MultiResponse@ef227c99 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.788 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.788 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xb2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xb2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 294715795764 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb3 zxid:0x80 txntype:14 reqpath:n/a 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.789 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 80, Digest in log and actual tree: 293818823919 19:06:41.789 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb3 zxid:0x80 txntype:14 reqpath:n/a 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 294715795764 19:06:41.789 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 292273347020 19:06:41.789 [ProcessThread(sid:0 cport:42969):] 
DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293575283285 19:06:41.789 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 178,3 replyHeader:: 178,127,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:41.790 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.790 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.790 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.790 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 179,14 replyHeader:: 179,128,0 request:: org.apache.zookeeper.MultiOperationRecord@eddd7e9 response:: org.apache.zookeeper.MultiResponse@292055f3 19:06:41.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb4 zxid:0x81 txntype:14 reqpath:n/a 19:06:41.790 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.790 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 81, Digest in log and actual tree: 295047891074 19:06:41.790 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb4 zxid:0x81 txntype:14 reqpath:n/a 19:06:41.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb5 zxid:0x82 txntype:14 reqpath:n/a 19:06:41.790 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293575283285 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.791 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 180,14 replyHeader:: 180,129,0 request:: org.apache.zookeeper.MultiOperationRecord@af7bd34f response:: org.apache.zookeeper.MultiResponse@c9be5159 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.791 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 
82, Digest in log and actual tree: 292694391535 19:06:41.791 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb5 zxid:0x82 txntype:14 reqpath:n/a 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 293575283285 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295483089389 19:06:41.791 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295555296346 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295555296346 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295555296346 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295072080643 19:06:41.792 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 181,14 replyHeader:: 181,130,0 request:: org.apache.zookeeper.MultiOperationRecord@6d6ddaca response:: org.apache.zookeeper.MultiResponse@87b058d4 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 297377167654 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 297377167654 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.792 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 297377167654 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 297892195823 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 298310223558 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 298310223558 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.793 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.793 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb6 zxid:0x83 txntype:14 reqpath:n/a 19:06:41.793 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 83, Digest in log and actual tree: 291938147788 19:06:41.794 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 298310223558 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb6 zxid:0x83 txntype:14 reqpath:n/a 19:06:41.794 [ProcessThread(sid:0 cport:42969):] DEBUG 
org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 295869601598 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb7 zxid:0x84 txntype:14 reqpath:n/a 19:06:41.794 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 182,14 replyHeader:: 182,131,0 request:: org.apache.zookeeper.MultiOperationRecord@43c4132a response:: org.apache.zookeeper.MultiResponse@5e069134 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 84, Digest in log and actual tree: 290175158083 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb7 zxid:0x84 txntype:14 reqpath:n/a 19:06:41.794 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 299401483777 19:06:41.794 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.794 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb8 zxid:0x85 txntype:14 reqpath:n/a 19:06:41.795 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 183,14 replyHeader:: 183,132,0 request:: org.apache.zookeeper.MultiOperationRecord@9c639d0 response:: org.apache.zookeeper.MultiResponse@2408b7da 19:06:41.795 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 85, Digest in log and actual tree: 294715795764 19:06:41.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb8 zxid:0x85 txntype:14 reqpath:n/a 19:06:41.795 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.795 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xb9 zxid:0x86 txntype:14 reqpath:n/a 19:06:41.795 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.795 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.795 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 184,14 replyHeader:: 184,133,0 request:: org.apache.zookeeper.MultiOperationRecord@dd5f26d4 response:: org.apache.zookeeper.MultiResponse@f7a1a4de 19:06:41.795 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.795 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 86, 
Digest in log and actual tree: 293575283285 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xb9 zxid:0x86 txntype:14 reqpath:n/a 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 299401483777 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xba zxid:0x87 txntype:14 reqpath:n/a 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Processing ACL: 31,s{'world,'anyone} 19:06:41.796 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 185,14 replyHeader:: 185,134,0 request:: org.apache.zookeeper.MultiOperationRecord@a8e7f4ad response:: org.apache.zookeeper.MultiResponse@c32a72b7 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 87, Digest in log and actual tree: 295555296346 19:06:41.796 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xba zxid:0x87 txntype:14 reqpath:n/a 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 4 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.796 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 299401483777 19:06:41.797 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 301258958457 19:06:41.796 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 186,14 replyHeader:: 186,135,0 request:: org.apache.zookeeper.MultiOperationRecord@479aa670 response:: org.apache.zookeeper.MultiResponse@61dd247a 19:06:41.797 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 303344084217 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 88, Digest in log and actual tree: 297377167654 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xbb zxid:0x88 txntype:14 reqpath:n/a 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 
type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 19:06:41.798 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 187,14 replyHeader:: 187,136,0 request:: org.apache.zookeeper.MultiOperationRecord@a6fcab0a response:: org.apache.zookeeper.MultiResponse@c13f2914 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 89, Digest in log and actual tree: 298310223558 19:06:41.798 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xbc zxid:0x89 txntype:14 reqpath:n/a 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.799 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 188,14 replyHeader:: 188,137,0 request:: org.apache.zookeeper.MultiOperationRecord@3a16448 response:: org.apache.zookeeper.MultiResponse@1de3e252 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8a, Digest in log and actual tree: 299401483777 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xbd zxid:0x8a txntype:14 reqpath:n/a 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.799 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xbe zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.799 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 189,14 replyHeader:: 189,138,0 request:: org.apache.zookeeper.MultiOperationRecord@3d303488 response:: org.apache.zookeeper.MultiResponse@5772b292 19:06:41.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:multi cxid:0xbf zxid:0x8b txntype:14 reqpath:n/a 19:06:41.800 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 190,3 replyHeader:: 190,138,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:41.800 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:06:41.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8b, Digest in log and actual tree: 303344084217 19:06:41.800 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:multi cxid:0xbf zxid:0x8b txntype:14 reqpath:n/a 19:06:41.800 
[data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 19:06:41.800 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:41.800 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 191,14 replyHeader:: 191,139,0 request:: org.apache.zookeeper.MultiOperationRecord@3b44eae5 response:: org.apache.zookeeper.MultiResponse@558768ef 19:06:41.802 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=7): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:41.802 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112001802, latencyMs=26, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=7), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:41.802 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:41.802 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
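The two DEBUG failures above are expected on a freshly started embedded broker: the automatic creation of __consumer_offsets collides with a creation that already completed (the TopicExistsException is benign and is simply cleared from the inflight state), and FIND_COORDINATOR answers with errorCode=15 (COORDINATOR_NOT_AVAILABLE) until the offsets topic is fully online, so the consumer refreshes metadata and retries. A minimal sketch of the same idempotent-create pattern with the Kafka AdminClient follows; the bootstrap address, the omitted SASL client settings, and the idea of creating the topic from test code at all are illustrative assumptions, not something this job does.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;

    public class EnsureOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Hypothetical address; 38537 is the ephemeral SASL_PLAINTEXT port this run happened
            // to use, and the SASL settings that listener would require are omitted for brevity.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");

            // Same topic settings the broker-side auto-creation logged above.
            NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of(
                            "compression.type", "producer",
                            "cleanup.policy", "compact",
                            "segment.bytes", "104857600"));

            try (AdminClient admin = AdminClient.create(props)) {
                try {
                    admin.createTopics(Collections.singleton(offsets)).all().get();
                } catch (ExecutionException e) {
                    if (e.getCause() instanceof TopicExistsException) {
                        // Benign, exactly as in the DEBUG entry above: the topic was created first
                        // by someone else, so there is nothing left to do.
                        System.out.println("__consumer_offsets already exists, nothing to do");
                    } else {
                        throw e;
                    }
                }
            }
        }
    }
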
19:06:41.802 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":7,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":25.156,"requestQueueTimeMs":0.15,"localTimeMs":24.448,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.168,"sendTimeMs":0.388,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.811 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.811 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.812 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 
[controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.813 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.814 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.815 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isr=List(1), leaderRecoveryState=RECOVERED, partitionEpoch=0) 19:06:41.815 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions 19:06:41.816 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions 19:06:41.817 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending LEADER_AND_ISR request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3) and timeout 30000 to node 1: LeaderAndIsrRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, type=0, ungroupedPartitionStates=[], topicStates=[LeaderAndIsrTopicState(topicName='__consumer_offsets', topicId=eotegnaZSI-1iOOsuTZpHg, partitionStates=[LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0), LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0)])], liveLeaders=[LeaderAndIsrLiveLeader(brokerId=1, hostName='localhost', port=38537)]) 19:06:41.819 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:06:41.821 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions 19:06:41.853 [data-plane-kafka-request-handler-1] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, 
__consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) 19:06:41.853 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions 19:06:41.855 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.855 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.856 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.856 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 192,4 replyHeader:: 192,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.860 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.861 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.861 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file 
/tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.861 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.861 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-3, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.861 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.862 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.863 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-3 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.863 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 19:06:41.863 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 19:06:41.863 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-3 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.863 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-3] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:41.868 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.868 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.869 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 193,4 replyHeader:: 193,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.872 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.873 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.873 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.873 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.873 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-18, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.874 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.874 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:41.875 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-18 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.875 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 19:06:41.875 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 19:06:41.875 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-18 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.875 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-18] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.876 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:41.876 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=8) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:41.879 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=8): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:41.879 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:41.880 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] 
Updated cluster metadata updateVersion 5 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:41.880 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:41.880 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=9) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:41.880 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":8,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.176,"requestQueueTimeMs":0.348,"localTimeMs":1.289,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.114,"sendTimeMs":0.424,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.882 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.883 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xc2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc3 zxid:0xfffffffffffffffe txntype:unknown 
reqpath:/config/topics/__consumer_offsets 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.883 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.883 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 194,3 replyHeader:: 194,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:41.883 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 195,4 replyHeader:: 195,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.884 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.884 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xc4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.884 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 196,3 replyHeader:: 196,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:41.885 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
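[editor's note] The ZkAdminManager entry directly above shows the broker's internal auto-creation of __consumer_offsets being rejected with TopicExistsException, which is harmless while the embedded test broker finishes loading the partitions it just became leader for. As a minimal sketch only (not part of this build), the same topic settings visible in this log (50 partitions, replication factor 1, cleanup.policy=compact, compression.type=producer, segment.bytes=104857600) could be created explicitly with the Kafka AdminClient while tolerating that exception; the bootstrap address and topic config are taken from the log, everything else (and the security settings it would additionally need against this SASL_PLAINTEXT listener) is an assumption.

// Hypothetical illustration: explicit creation of the topic config seen in this log,
// ignoring TopicExistsException exactly as the broker-side auto-creation path does.
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class CreateOffsetsTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Broker address as reported in the LeaderAndIsr/metadata entries above.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
        // NOTE: the real listener in this log is SASL_PLAINTEXT; security properties
        // would also be required but are omitted here as they are not in the log.

        try (Admin admin = Admin.create(props)) {
            NewTopic offsets = new NewTopic("__consumer_offsets", 50, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",
                            "compression.type", "producer",
                            "segment.bytes", "104857600"));
            try {
                admin.createTopics(List.of(offsets)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    // Same condition logged by ZkAdminManager: the topic already exists.
                    System.out.println("__consumer_offsets already exists; nothing to do");
                } else {
                    throw e;
                }
            }
        }
    }
}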
19:06:41.885 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:41.886 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=9): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:41.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112001886, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=9), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:41.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:41.886 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
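[editor's note] The FindCoordinator response above carries errorCode=15 (COORDINATOR_NOT_AVAILABLE): the group coordinator for mso-group lives on an __consumer_offsets partition the broker is still creating, so the consumer refreshes metadata and retries, which is why the log keeps alternating between METADATA and FIND_COORDINATOR requests. Below is a minimal, hedged sketch of a consumer whose configuration mirrors the identifiers visible in this log (group mso-group, client id prefix mso-123456-consumer, bootstrap localhost:38537, topic my-test-topic, auto topic creation disabled); the SASL mechanism and credentials are assumptions, since the log only shows the SASL_PLAINTEXT listener and principal User:admin.

// Hypothetical consumer setup mirroring the identifiers seen in this log; poll()
// drives coordinator discovery and retries COORDINATOR_NOT_AVAILABLE internally.
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                  // from the log
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");       // prefix from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");      // matches allowAutoTopicCreation=false
        // Security settings are assumptions; the log only shows SASL_PLAINTEXT and User:admin.
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"<test-password>\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic")); // topic name from the metadata requests above
            // While the coordinator is unavailable the client retries transparently,
            // producing the repeated FIND_COORDINATOR / METADATA entries in this log.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("fetched " + records.count() + " records");
        }
    }
}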
19:06:41.886 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":9,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":5.123,"requestQueueTimeMs":0.17,"localTimeMs":4.543,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.159,"sendTimeMs":0.25,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.888 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.889 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.889 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.889 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.889 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-41, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.889 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.890 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.890 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-41 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.891 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 19:06:41.891 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 19:06:41.891 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-41 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 
19:06:41.891 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-41] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.895 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.895 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.896 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.896 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 197,4 replyHeader:: 197,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.899 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.899 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.899 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.899 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.899 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-10, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.900 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.900 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:41.900 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-10 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.900 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 19:06:41.901 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 19:06:41.901 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-10 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.901 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-10] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.905 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.906 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.906 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.906 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.906 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.906 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.906 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 198,4 replyHeader:: 198,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.909 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.909 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.909 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.909 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.909 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-33, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.910 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.910 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.911 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-33 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.911 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 19:06:41.911 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 19:06:41.911 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-33 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.911 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-33] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:41.916 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.916 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.916 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.916 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.916 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.916 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.917 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 199,4 replyHeader:: 199,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.919 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.919 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.920 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.920 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.920 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-48, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.920 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.921 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:41.921 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-48 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.921 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 19:06:41.921 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 19:06:41.922 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-48 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.922 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-48] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.926 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.926 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.926 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 200,4 replyHeader:: 200,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.929 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.929 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.930 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.930 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-19, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.931 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.931 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.932 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-19 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.932 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 19:06:41.932 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 19:06:41.932 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-19 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.932 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-19] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:41.938 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xc9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.938 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.939 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 201,4 replyHeader:: 201,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.941 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.941 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.942 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.942 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.942 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-34, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.942 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.943 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:41.943 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-34 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.943 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 19:06:41.943 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 19:06:41.943 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-34 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.943 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-34] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.947 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xca zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.948 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.948 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 202,4 replyHeader:: 202,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.950 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.950 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.951 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.951 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.951 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-4, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.951 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.952 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.952 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-4 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.952 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 19:06:41.952 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 19:06:41.952 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-4 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.952 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-4] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:41.957 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xcb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.957 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.958 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 203,4 replyHeader:: 203,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.960 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.961 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-11, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.961 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.962 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:41.962 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-11 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.962 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 19:06:41.962 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 19:06:41.962 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-11 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.962 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-11] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:41.967 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xcc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:41.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:41.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:41.967 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:41.968 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 204,4 replyHeader:: 204,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:41.970 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:41.970 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.index was not resized because it already has size 10485760 19:06:41.970 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:41.970 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:41.970 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-26, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:41.971 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:41.971 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:41.971 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-26 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:41.972 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 19:06:41.972 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 19:06:41.972 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-26 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:41.972 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-26] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
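The maxEntries and "was not resized" figures in the index-loading lines above are plain arithmetic on Kafka's fixed index entry sizes (8 bytes per offset-index entry, 12 bytes per time-index entry) against a 10 MiB index file. A quick check, assuming those entry sizes:

class IndexSizing {
    public static void main(String[] args) {
        int maxIndexSize = 10485760;                   // 10 MiB index file, as logged
        System.out.println(maxIndexSize / 8);          // 1310720  -> offset index maxEntries
        System.out.println(maxIndexSize / 12);         // 873813   -> time index maxEntries
        System.out.println((maxIndexSize / 12) * 12);  // 10485756 -> why the .timeindex stays at 10485756 bytes
    }
}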
19:06:41.980 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:41.980 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=10) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:41.984 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=10): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:41.985 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:41.985 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":10,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":3.473,"requestQueueTimeMs":0.403,"localTimeMs":2.562,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.122,"sendTimeMs":0.386,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:41.985 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 6 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:41.985 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:41.986 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=11) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:41.988 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.989 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xcd zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:41.989 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 205,3 replyHeader:: 205,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:41.990 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:41.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.991 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xce zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:41.991 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 206,3 replyHeader:: 206,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:41.992 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
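Right before that TopicExistsException the broker consults ZooKeeper: the exists() call on /admin/delete_topics/__consumer_offsets comes back with error -101 (NONODE, i.e. no pending delete), while /brokers/topics/__consumer_offsets returns a Stat, so the topic is already registered and the auto-create attempt is rejected. A hedged sketch of the same two checks with the plain ZooKeeper client (port taken from the cport:42969 threads above; this is illustrative, not the broker's actual code path):

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

class ZkTopicCheckSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:42969", 30000, event -> { });
        // exists() returns null for an absent node (the -101/NONODE reply) and a Stat otherwise.
        Stat pendingDelete = zk.exists("/admin/delete_topics/__consumer_offsets", false);
        Stat topicNode = zk.exists("/brokers/topics/__consumer_offsets", false);
        System.out.println("delete pending: " + (pendingDelete != null));  // false in this run
        System.out.println("topic registered: " + (topicNode != null));    // true in this run
        zk.close();
    }
}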
19:06:41.992 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:41.993 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=11): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:41.993 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112001993, latencyMs=8, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=11), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:41.993 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:41.993 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
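The FindCoordinator responses carry errorCode=15, which the Java client maps to CoordinatorNotAvailableException and answers by refreshing metadata and retrying; that is the loop visible here while the broker is still bringing up the 50 __consumer_offsets partitions. For orientation, a minimal consumer configured the way the log output suggests (the bootstrap port, group.id, client.id prefix and SASL_PLAINTEXT listener are read off the log; the SASL mechanism and JAAS credentials are assumptions, not values from the build):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");                 // assumption
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"admin\" password=\"changeme\";"); // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));
            // One poll is enough to trigger the METADATA / FIND_COORDINATOR exchange traced above.
            consumer.poll(Duration.ofSeconds(1));
        }
    }
}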
19:06:41.994 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":11,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":6.745,"requestQueueTimeMs":0.163,"localTimeMs":5.998,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.129,"sendTimeMs":0.454,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.000 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.000 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.000 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xcf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.001 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.001 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 207,4 replyHeader:: 207,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.005 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.006 
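In the kafka.request.logger entries, totalTimeMs is the sum of the queue, local, remote, response-queue and send components (throttle time is 0 here): for the FIND_COORDINATOR request above, 0.163 + 5.998 + 0.0 + 0.129 + 0.454 ≈ 6.745 ms, and the earlier METADATA request sums exactly to 0.403 + 2.562 + 0.0 + 0.122 + 0.386 = 3.473 ms. This is the usual reading of those fields; the figures in this run bear it out.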
[data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-49, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.006 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.007 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.007 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-49 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.007 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 19:06:42.007 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 19:06:42.007 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-49 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.008 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-49] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.014 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.014 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.015 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 208,4 replyHeader:: 208,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.017 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.018 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.018 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.018 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-39, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.019 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.019 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.020 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-39 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.020 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 19:06:42.020 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 19:06:42.020 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-39 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.020 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-39] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.025 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.025 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.025 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.025 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.025 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.025 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.026 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 209,4 replyHeader:: 209,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.028 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.028 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.028 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.028 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.029 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-9, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.029 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.029 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.030 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-9 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.030 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 19:06:42.030 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 19:06:42.030 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-9 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.030 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-9] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.034 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.034 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.035 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 210,4 replyHeader:: 210,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.037 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.037 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.037 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.037 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.037 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-24, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.038 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.038 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.038 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-24 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.039 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 19:06:42.039 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 19:06:42.039 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-24 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.039 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-24] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.047 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.047 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.048 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 211,4 replyHeader:: 211,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.051 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.052 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-31, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.052 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.053 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.054 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-31 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.054 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 19:06:42.054 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 19:06:42.054 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-31 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.054 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-31] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.058 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.058 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.059 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 212,4 replyHeader:: 212,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.061 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.061 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.061 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.061 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.061 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-46, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.061 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.063 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.063 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-46 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.063 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 19:06:42.063 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 19:06:42.063 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-46 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.063 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-46] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.073 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.073 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.073 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 213,4 replyHeader:: 213,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.077 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.078 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-1, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.078 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.079 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.079 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-1 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.079 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 19:06:42.079 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 19:06:42.079 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-1 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.079 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-1] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.083 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.083 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.084 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.084 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 214,4 replyHeader:: 214,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.085 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.085 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=12) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.087 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.088 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-16, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.088 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.089 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=12): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.089 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.089 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 7 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.089 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":12,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.473,"requestQueueTimeMs":0.369,"localTimeMs":1.538,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.161,"sendTimeMs":0.404,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.090 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.090 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=13) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.090 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.091 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-16 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.091 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 19:06:42.091 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 19:06:42.091 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-16 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.091 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-16] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.093 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.093 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xd7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.093 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 215,3 replyHeader:: 215,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.094 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xd8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.094 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.094 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xd9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.095 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 216,4 replyHeader:: 216,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.095 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 217,3 replyHeader:: 217,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 
19:06:42.096 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 19:06:42.096 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.097 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=13): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.097 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002097, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=13), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.097 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:42.097 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":13,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":6.324,"requestQueueTimeMs":0.159,"localTimeMs":5.834,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.24,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.097 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
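Both TopicExistsException entries are benign: the FindCoordinator path asks the broker to auto-create __consumer_offsets (50 partitions, replication factor 1, compacted), finds it already registered in ZooKeeper, and simply clears the inflight creation state while the existing partition logs keep loading. The same condition surfaces to an external client as below; a hedged sketch only (it reuses the logged bootstrap port and topic settings, and would additionally need the SASL properties from the consumer sketch to actually reach this broker):

import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

class CreateOffsetsTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
        try (Admin admin = Admin.create(props)) {
            NewTopic topic = new NewTopic("__consumer_offsets", 50, (short) 1);
            try {
                admin.createTopics(List.of(topic)).all().get();
            } catch (ExecutionException e) {
                if (e.getCause() instanceof TopicExistsException) {
                    // Same outcome the broker logs above: creation is a no-op because the topic exists.
                    System.out.println("Already exists: " + e.getCause().getMessage());
                } else {
                    throw e;
                }
            }
        }
    }
}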
19:06:42.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.098 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.098 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-2, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.099 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.099 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.099 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-2 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.099 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 19:06:42.100 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 19:06:42.100 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-2 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.100 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-2] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
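The index sizes in these LogLoader entries follow from Kafka's on-disk entry sizes (assuming the standard layout of an 8-byte offset-index entry and a 12-byte time-index entry): the 10485760-byte .index file holds 10485760 / 8 = 1310720 entries, while the .timeindex holds 10485760 / 12 = 873813 entries and is rounded down to 873813 * 12 = 10485756 bytes, which is why it reports "already has size 10485756".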
19:06:42.103 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xda zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.103 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.104 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 218,4 replyHeader:: 218,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.106 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.106 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-25, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.107 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.107 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.107 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-25 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.108 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 19:06:42.108 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 19:06:42.108 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-25 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.108 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-25] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.112 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xdb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.112 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.112 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 219,4 replyHeader:: 219,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.115 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.115 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.115 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.115 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.115 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-40, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.116 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.117 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.117 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-40 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.117 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 19:06:42.117 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 19:06:42.117 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-40 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.117 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-40] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
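The per-partition "Created log for partition __consumer_offsets-N" blocks are the broker materialising the offsets topic requested above as CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1) with compression.type=producer, cleanup.policy=compact and segment.bytes=104857600. Nothing in this run creates it by hand (the broker auto-creates it), but for reference a topic with the same shape could be created explicitly through the admin API along these lines; this is a sketch only, with a hypothetical topic name and the SASL settings omitted:

    import java.util.Map;
    import java.util.Properties;
    import java.util.Set;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.NewTopic;

    public class OffsetsShapedTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:38537");  // embedded broker from this run
            // SASL_PLAINTEXT settings omitted; the broker in this log requires them.
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic("offsets-shaped-topic", 50, (short) 1)  // hypothetical name
                        .configs(Map.of(
                                "compression.type", "producer",
                                "cleanup.policy", "compact",
                                "segment.bytes", "104857600"));
                admin.createTopics(Set.of(topic)).all().get();
            }
        }
    }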
19:06:42.120 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.120 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xdc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.121 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.121 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 220,4 replyHeader:: 220,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.123 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.124 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-47, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.124 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.124 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.125 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-47 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.125 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 19:06:42.125 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 19:06:42.125 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-47 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.125 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-47] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.129 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.129 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xdd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.130 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.130 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 221,4 replyHeader:: 221,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.132 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.132 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.132 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.132 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.133 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-17, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.133 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.134 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.134 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-17 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.134 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 19:06:42.134 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 19:06:42.134 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-17 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.134 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-17] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.139 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xde zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.139 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.140 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.140 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 222,4 replyHeader:: 222,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.143 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.144 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.144 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-32, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.144 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.145 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.145 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-32 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.145 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 19:06:42.145 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 19:06:42.145 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-32 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.145 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-32] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.151 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xdf zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.151 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.152 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 223,4 replyHeader:: 223,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.155 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.155 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.155 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.155 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.155 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.156 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.157 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.157 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-37 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.157 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 19:06:42.157 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 19:06:42.157 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-37 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.157 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-37] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.165 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.165 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.165 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 224,4 replyHeader:: 224,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.168 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.169 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.169 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.169 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.169 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-7, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.169 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.170 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.170 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-7 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.170 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 19:06:42.170 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 19:06:42.170 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-7 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.171 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-7] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.176 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.176 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.177 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 225,4 replyHeader:: 225,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.179 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.179 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.180 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.180 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.180 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-22, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.180 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.181 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.181 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-22 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.181 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 19:06:42.181 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 19:06:42.182 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-22 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.182 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-22] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.187 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.187 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.187 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.187 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.187 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.187 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.187 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 226,4 replyHeader:: 226,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.190 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.190 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.190 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=14) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.190 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.190 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.190 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index 
/tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.191 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-29, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.191 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.192 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.192 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-29 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.192 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 19:06:42.192 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 19:06:42.192 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-29 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.192 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-29] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.194 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=14): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.194 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.194 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 8 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.194 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.194 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":14,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.466,"requestQueueTimeMs":0.338,"localTimeMs":1.588,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.128,"sendTimeMs":0.409,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.195 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=15) and timeout 
30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.197 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.197 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.197 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xe3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.198 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 227,3 replyHeader:: 227,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.199 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.199 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xe4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.199 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 228,3 replyHeader:: 228,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:42.199 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
19:06:42.200 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.201 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=15): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.201 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002200, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=15), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.201 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:42.201 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
19:06:42.201 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":15,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":5.284,"requestQueueTimeMs":0.223,"localTimeMs":4.8,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.077,"sendTimeMs":0.182,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.292 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.292 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.292 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.292 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.292 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.292 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.293 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 229,4 replyHeader:: 229,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.294 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.294 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=16) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.297 
[data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.297 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=16): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.297 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.298 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.298 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":16,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.161,"requestQueueTimeMs":0.262,"localTimeMs":1.5,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.112,"sendTimeMs":0.286,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.298 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 9 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.298 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-44, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.298 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.298 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=17) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.298 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.299 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.299 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-44 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.300 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 19:06:42.300 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 19:06:42.300 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-44 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.300 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-44] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.301 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.301 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.301 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xe6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.301 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 230,3 replyHeader:: 230,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.302 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.302 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.302 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xe7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.303 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 231,3 replyHeader:: 231,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:42.304 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
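The TopicExistsException above is benign: two code paths race to auto-create __consumer_offsets (50 partitions, replication factor 1), the znode already exists, so the broker drops the in-flight creation and the consumer keeps retrying coordinator discovery. Test code that pre-creates topics has to tolerate the same race; a minimal sketch with the standard Kafka AdminClient (hypothetical helper, not taken from this repository; SASL settings omitted for brevity):

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureTopic {
        // Creates the topic if missing and treats "already exists" as success,
        // mirroring how the broker handles the race visible in the log above.
        static void ensureTopic(String bootstrap, String name, int partitions) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
            try (Admin admin = Admin.create(props)) {
                NewTopic topic = new NewTopic(name, partitions, (short) 1)
                        .configs(Map.of("cleanup.policy", "compact"));
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // only the duplicate-creation race is benign
                    }
                }
            }
        }
    }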
19:06:42.304 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.305 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.305 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":17,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":5.954,"requestQueueTimeMs":0.15,"localTimeMs":5.462,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.251,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.305 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=17): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.306 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002305, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=17), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.306 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator 
lookup failed: 19:06:42.306 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 19:06:42.305 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.306 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.306 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 232,4 replyHeader:: 232,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.314 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.315 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.315 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-14, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.316 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.316 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
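The cycle above (METADATA succeeds, FIND_COORDINATOR returns errorCode 15, CoordinatorNotAvailableException, metadata refresh) repeats until the broker finishes loading the group's __consumer_offsets partitions; the consumer library drives these retries internally from poll(). For orientation, a consumer configured the way these lines suggest (SASL_PLAINTEXT listener, group "mso-group", topic "my-test-topic") would look roughly like the sketch below; the PLAIN mechanism, the credentials and the per-run broker port are assumptions, and the actual client in this build may differ.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class MsoConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537"); // ephemeral port from this run
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN"); // assumed; the log only shows principal User:admin
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("my-test-topic"));
                // poll() performs the FindCoordinator/JoinGroup retries internally, including
                // the COORDINATOR_NOT_AVAILABLE responses seen in the log above.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                records.forEach(r -> System.out.println(r.value()));
            }
        }
    }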
19:06:42.317 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-14 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.317 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 19:06:42.317 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 19:06:42.317 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-14 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.317 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-14] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.323 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xe9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.323 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.324 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 233,4 replyHeader:: 233,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.327 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.328 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-23, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.328 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.328 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.329 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-23 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.329 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 19:06:42.329 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 19:06:42.329 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-23 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.329 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-23] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
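Each of these partition-creation bursts starts with a ZooKeeper getData on /config/topics/__consumer_offsets, and ClientCnxn logs the reply as a '#'-prefixed hex blob. Decoded, it is just the topic-config JSON that the later "Created log for partition ..." lines echo: {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}. A small sketch for decoding such a blob when reading a log like this one:

    import java.nio.charset.StandardCharsets;

    public class ZkHexDecode {
        public static void main(String[] args) {
            // Hex payload copied from a "response::" field above, leading '#' stripped.
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b"
                       + "22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c"
                       + "22636c65616e75702e706f6c696379223a22636f6d70616374222c"
                       + "227365676d656e742e6279746573223a22313034383537363030227d7d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints the 109-byte config JSON quoted above (dataLength 109 in the ZooKeeper stat).
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
    }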
19:06:42.334 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xea zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.334 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.335 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 234,4 replyHeader:: 234,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.338 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.338 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.338 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.338 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.338 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-38, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.339 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.339 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.339 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-38 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.340 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 19:06:42.340 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 19:06:42.340 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-38 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.340 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-38] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.345 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xeb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.345 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.346 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 235,4 replyHeader:: 235,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.349 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.349 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-8, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.350 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.350 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.350 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-8 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.351 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 19:06:42.351 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 19:06:42.351 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-8 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.351 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-8] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
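The index numbers in these LogLoader lines all fall out of the same 10 MiB pre-allocation: an offset-index entry is 8 bytes, so maxEntries = 10485760 / 8 = 1310720, and a time-index entry is 12 bytes, so maxEntries = 10485760 / 12 = 873813, with the .timeindex kept at 10485756 bytes, the largest multiple of 12 that fits. A quick check of that arithmetic:

    public class IndexSizing {
        public static void main(String[] args) {
            int maxIndexSize = 10 * 1024 * 1024;           // 10485760 bytes, from the log above
            System.out.println(maxIndexSize / 8);          // 1310720 offset-index entries (8 bytes each)
            System.out.println(maxIndexSize / 12);         // 873813 time-index entries (12 bytes each)
            System.out.println((maxIndexSize / 12) * 12);  // 10485756, the .timeindex size reported above
        }
    }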
19:06:42.356 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xec zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.356 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.357 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 236,4 replyHeader:: 236,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.360 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.360 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.360 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.360 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.360 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-45, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.360 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.361 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.361 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-45 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.361 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 19:06:42.362 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 19:06:42.362 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-45 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.362 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-45] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.369 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xed zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.369 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.370 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.370 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 237,4 replyHeader:: 237,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.395 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.396 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.396 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.396 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.396 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-15, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.397 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.397 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.398 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-15 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.398 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.398 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 19:06:42.398 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 19:06:42.398 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=18) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.398 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-15 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.398 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-15] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.401 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=18): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.401 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.402 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":18,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.254,"requestQueueTimeMs":0.293,"localTimeMs":1.605,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.136,"sendTimeMs":0.219,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.402 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 10 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.402 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.402 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=19) and 
timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.404 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.405 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.405 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xee zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.405 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 238,3 replyHeader:: 238,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.406 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.406 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xef zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.406 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 239,3 replyHeader:: 239,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:42.407 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
19:06:42.407 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.408 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=19): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.408 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002408, latencyMs=6, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=19), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.408 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:42.408 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":19,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":5.483,"requestQueueTimeMs":0.101,"localTimeMs":5.101,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.071,"sendTimeMs":0.209,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.408 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
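Every "Completed request:" entry from kafka.request.logger is a single JSON document carrying the API name, correlation id and the timing breakdown (totalTimeMs, requestQueueTimeMs, localTimeMs, ...), which makes slow requests easy to pull out of a console log like this. A sketch using Jackson (an assumption; any JSON parser would do) over an abbreviated payload, with the values taken from the FIND_COORDINATOR entry above:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class RequestLogLine {
        public static void main(String[] args) throws Exception {
            // Abbreviated "Completed request:" payload; field values copied from the log above.
            String json = "{\"requestHeader\":{\"requestApiKeyName\":\"FIND_COORDINATOR\",\"correlationId\":19},"
                        + "\"totalTimeMs\":5.483,\"requestQueueTimeMs\":0.101,\"localTimeMs\":5.101}";
            JsonNode node = new ObjectMapper().readTree(json);
            System.out.printf("%s corr=%d took %.3f ms%n",
                    node.at("/requestHeader/requestApiKeyName").asText(),
                    node.at("/requestHeader/correlationId").asInt(),
                    node.get("totalTimeMs").asDouble());
        }
    }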
19:06:42.408 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf0 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.408 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.409 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 240,4 replyHeader:: 240,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.412 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.412 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.412 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.412 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.413 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-30, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.413 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.414 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.414 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-30 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.414 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 19:06:42.414 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 19:06:42.415 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-30 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.415 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-30] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.419 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf1 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.419 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.419 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 241,4 replyHeader:: 241,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.422 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.422 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.422 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.422 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.423 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-0, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.423 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.423 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.424 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-0 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.424 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 19:06:42.424 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 19:06:42.424 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-0 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.424 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-0] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.429 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.429 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.429 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf2 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.429 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.429 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.429 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.432 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 242,4 replyHeader:: 242,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.439 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.439 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.439 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.439 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.440 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-35, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.440 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.441 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
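Until all 50 __consumer_offsets partitions are loaded, the group has no coordinator and JoinGroup cannot complete, so tests against an embedded broker like this one typically wait for the consumer to actually receive an assignment before producing and asserting. A bounded wait along these lines (hypothetical helper, not taken from this repository) is the usual pattern:

    import java.time.Duration;
    import java.time.Instant;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class WaitForAssignment {
        // Polls until the consumer owns at least one partition or the deadline passes.
        // Assumes subscribe() was already called on the consumer.
        static boolean waitForAssignment(KafkaConsumer<?, ?> consumer, Duration timeout) {
            Instant deadline = Instant.now().plus(timeout);
            while (Instant.now().isBefore(deadline)) {
                consumer.poll(Duration.ofMillis(200)); // drives coordinator discovery and rebalancing
                if (!consumer.assignment().isEmpty()) {
                    return true;
                }
            }
            return false;
        }
    }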
19:06:42.441 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-35 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.441 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 19:06:42.442 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 19:06:42.442 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-35 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.442 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-35] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.448 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf3 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.448 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.449 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 243,4 replyHeader:: 243,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.453 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.453 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.453 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.453 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.453 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-5, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.454 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.454 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.455 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-5 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.455 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 19:06:42.455 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 19:06:42.455 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-5 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.456 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-5] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
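The long hex blob in each ZooKeeper "response::" line is just the UTF-8 JSON stored at /config/topics/__consumer_offsets; decoded, it reads {"version":1,"config":{"compression.type":"producer","cleanup.policy":"compact","segment.bytes":"104857600"}}, matching the per-topic properties the broker logs when it creates each partition, and its 109-byte length matches the data length in the trailing stat. A throwaway decoder, assuming nothing beyond the hex string itself (the class name is illustrative):

    import java.nio.charset.StandardCharsets;

    // Decodes the hex-encoded znode payload shown in the ClientCnxn "response::" lines above.
    public final class ZnodeHexDecode {
        public static void main(String[] args) {
            String hex = "7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065"
                    + "223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c"
                    + "227365676d656e742e6279746573223a22313034383537363030227d7d";
            byte[] bytes = new byte[hex.length() / 2];
            for (int i = 0; i < bytes.length; i++) {
                bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
            }
            // Prints the topic config JSON stored under /config/topics/__consumer_offsets.
            System.out.println(new String(bytes, StandardCharsets.UTF_8));
        }
    }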
19:06:42.460 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.461 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.461 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf4 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.461 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.461 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.461 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.461 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 244,4 replyHeader:: 244,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.465 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.465 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.465 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.466 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.466 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-20, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.466 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.467 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.468 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-20 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.468 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 19:06:42.468 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 19:06:42.468 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-20 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.468 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-20] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.472 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf5 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.473 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.473 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 245,4 replyHeader:: 245,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.477 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.478 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-27, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.478 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.479 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.480 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-27 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.480 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 19:06:42.480 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 19:06:42.480 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-27 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.481 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-27] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.492 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf6 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.492 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.492 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 246,4 replyHeader:: 246,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.495 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.496 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.496 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.496 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.496 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-42, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.496 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.497 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.497 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-42 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.497 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 19:06:42.497 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 19:06:42.498 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-42 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.498 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-42] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.502 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.502 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=20) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.503 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xf7 zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.503 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.503 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 247,4 replyHeader:: 247,139,0 request:: 
'/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.505 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=20): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.506 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.506 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":20,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.332,"requestQueueTimeMs":0.4,"localTimeMs":1.446,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.099,"sendTimeMs":0.385,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.506 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 11 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.506 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.507 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=21) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.508 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.508 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.509 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.509 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.509 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-12, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.510 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.511 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.511 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.511 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xf8 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.511 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-12 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.512 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 248,3 replyHeader:: 248,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.512 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 19:06:42.512 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 19:06:42.512 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-12 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.512 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-12] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.513 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.513 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0xf9 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.514 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 249,3 replyHeader:: 249,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:42.514 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
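The ZkAdminManager DEBUG above is the broker's auto-topic-creation path noticing that __consumer_offsets already exists, which is expected once the first creation attempt has gone through; the TopicExistsException is benign here. For comparison, this is roughly how a client would treat the same error as idempotent when creating a topic itself; a sketch only, with the bootstrap address and topic name taken from the log, all other names made up, and the embedded broker's SASL settings ignored for brevity:

    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public final class IdempotentTopicCreate {
        // Creates the topic if missing; an already-existing topic is not treated as an error,
        // mirroring how the broker-side DEBUG above is harmless.
        static void ensureTopic(AdminClient admin, String name) throws Exception {
            try {
                admin.createTopics(Collections.singleton(new NewTopic(name, 1, (short) 1)))
                        .all().get();
            } catch (ExecutionException e) {
                if (!(e.getCause() instanceof TopicExistsException)) {
                    throw e;
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
            try (AdminClient admin = AdminClient.create(props)) {
                ensureTopic(admin, "my-test-topic");
            }
        }
    }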
19:06:42.515 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.516 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=21): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.516 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002516, latencyMs=10, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=21), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.516 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:42.516 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":21,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":8.454,"requestQueueTimeMs":0.157,"localTimeMs":7.965,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.095,"sendTimeMs":0.236,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.516 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
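The FindCoordinator response above carries errorCode=15 (COORDINATOR_NOT_AVAILABLE) because the group coordinator cannot serve the group until the broker has finished creating and loading all 50 __consumer_offsets partitions, which is exactly what the surrounding LogLoader/LogManager lines show in progress. On the client side nothing special is required: the consumer refreshes metadata and re-sends FIND_COORDINATOR from inside poll(). A minimal consumer sketch of that view, with the group id, topic name and broker address taken from the log, deserializers and timeout chosen arbitrarily, and the SASL settings omitted here (a property sketch for those follows at the end of this section):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public final class CoordinatorRetrySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537"); // from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");                // from the log
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singleton("my-test-topic"));
                // COORDINATOR_NOT_AVAILABLE is retried transparently inside poll(): the client
                // refreshes metadata and re-sends FIND_COORDINATOR until the coordinator is ready.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.println("records fetched: " + records.count());
            }
        }
    }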
19:06:42.517 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xfa zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.517 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.518 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 250,4 replyHeader:: 250,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.520 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.521 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.521 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.521 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.521 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-21, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.521 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.522 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.522 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-21 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.523 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 19:06:42.523 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 19:06:42.523 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-21 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.523 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-21] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.527 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.527 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.528 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xfb zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.528 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.528 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.528 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.528 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 251,4 replyHeader:: 251,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 
10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.531 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.532 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-36, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.532 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.533 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.533 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-36 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.533 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 19:06:42.533 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 19:06:42.533 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-36 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.533 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-36] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
19:06:42.538 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xfc zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.538 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.539 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.539 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 252,4 replyHeader:: 252,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.542 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.543 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.543 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.543 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.543 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-6, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.544 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.545 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-6 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.545 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 19:06:42.545 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 19:06:42.545 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-6 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.545 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-6] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.550 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xfd zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.550 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.551 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 253,4 replyHeader:: 253,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.553 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.553 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.553 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, 
entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.553 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.554 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-43, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.554 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.555 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.555 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-43 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.555 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 19:06:42.555 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 19:06:42.556 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-43 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.556 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-43] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 
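Every partition directory loaded above contains files named 00000000000000000000.index and 00000000000000000000.timeindex because Kafka names segment files by the segment's base offset, zero-padded to 20 digits; these logs are empty, so every segment starts at offset 0. A tiny illustration of that naming, not code from this build:

    // Illustrates Kafka's segment file naming (base offset zero-padded to 20 digits),
    // matching the 00000000000000000000.* paths loaded in the log above.
    public final class SegmentFileNames {
        static String segmentFile(long baseOffset, String suffix) {
            return String.format("%020d", baseOffset) + suffix;
        }

        public static void main(String[] args) {
            System.out.println(segmentFile(0L, ".index"));      // 00000000000000000000.index
            System.out.println(segmentFile(0L, ".timeindex"));  // 00000000000000000000.timeindex
            System.out.println(segmentFile(123L, ".log"));      // 00000000000000000123.log
        }
    }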
19:06:42.599 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xfe zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.599 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.600 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 254,4 replyHeader:: 254,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.603 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.603 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.603 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.603 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.603 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-13, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.604 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.604 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 
19:06:42.604 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-13 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.605 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 19:06:42.605 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 19:06:42.605 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-13 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.605 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-13] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.606 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.606 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=22) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.609 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=22): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.610 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.610 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] 
Updated cluster metadata updateVersion 12 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.610 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":22,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.091,"requestQueueTimeMs":0.342,"localTimeMs":1.382,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.276,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.610 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.610 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=23) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.613 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0xff zxid:0xfffffffffffffffe txntype:unknown reqpath:/config/topics/__consumer_offsets 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:06:42.613 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.613 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing 
request:: sessionid:0x1000002c50e0000 type:exists cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.614 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x100 zxid:0xfffffffffffffffe txntype:unknown reqpath:/admin/delete_topics/__consumer_offsets 19:06:42.614 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/config/topics/__consumer_offsets serverPath:/config/topics/__consumer_offsets finished:false header:: 255,4 replyHeader:: 255,139,0 request:: '/config/topics/__consumer_offsets,F response:: #7b2276657273696f6e223a312c22636f6e666967223a7b22636f6d7072657373696f6e2e74797065223a2270726f6475636572222c22636c65616e75702e706f6c696379223a22636f6d70616374222c227365676d656e742e6279746573223a22313034383537363030227d7d,s{37,37,1755112001575,1755112001575,0,0,0,0,109,0,37} 19:06:42.614 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/admin/delete_topics/__consumer_offsets serverPath:/admin/delete_topics/__consumer_offsets finished:false header:: 256,3 replyHeader:: 256,139,-101 request:: '/admin/delete_topics/__consumer_offsets,F response:: 19:06:42.615 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:42.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:exists cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.615 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:exists cxid:0x101 zxid:0xfffffffffffffffe txntype:unknown reqpath:/brokers/topics/__consumer_offsets 19:06:42.615 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/brokers/topics/__consumer_offsets serverPath:/brokers/topics/__consumer_offsets finished:false header:: 257,3 replyHeader:: 257,139,0 request:: '/brokers/topics/__consumer_offsets,T response:: s{38,38,1755112001586,1755112001586,0,1,0,0,548,1,39} 19:06:42.616 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ZkAdminManager - [Admin Manager on Broker 1]: Topic creation failed since topic '__consumer_offsets' already exists. org.apache.kafka.common.errors.TopicExistsException: Topic '__consumer_offsets' already exists. 
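The TopicExistsException just above is expected rather than a failure: two request-handler threads race to auto-create __consumer_offsets, and the loser finds the topic znode already present. Test code that pre-creates topics usually tolerates the same race; a minimal sketch with Kafka's Java AdminClient, using the broker address from this run and omitting the SASL_PLAINTEXT credentials the test broker requires:

    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.apache.kafka.common.errors.TopicExistsException;

    public class EnsureTopicExists {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic("my-test-topic", 1, (short) 1);
                try {
                    admin.createTopics(List.of(topic)).all().get();
                } catch (ExecutionException e) {
                    if (!(e.getCause() instanceof TopicExistsException)) {
                        throw e; // anything other than "already exists" is a real failure
                    }
                    // TopicExistsException is benign: another caller won the creation race,
                    // exactly as the broker logs above for __consumer_offsets.
                }
            }
        }
    }
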
19:06:42.616 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DefaultAutoTopicCreationManager - Cleared inflight topic creation state for HashMap(__consumer_offsets -> CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=1, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')])) 19:06:42.617 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=23): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')]) 19:06:42.617 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":23,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":-1,"host":"","port":-1,"errorCode":15,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":5.843,"requestQueueTimeMs":0.194,"localTimeMs":5.355,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.081,"sendTimeMs":0.212,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.617 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002617, latencyMs=7, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=23), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=-1, host='', port=-1, errorCode=15, errorMessage='')])) 19:06:42.617 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator lookup failed: 19:06:42.617 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Coordinator discovery failed, refreshing metadata org.apache.kafka.common.errors.CoordinatorNotAvailableException: The coordinator is not available. 
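Error code 15 in the FindCoordinator response above is COORDINATOR_NOT_AVAILABLE: the coordinator for mso-group cannot be resolved until the broker finishes electing itself group coordinator for the __consumer_offsets partitions, so the client refreshes metadata and retries. The KafkaConsumer handles this retry inside poll() rather than surfacing the exception. A minimal consumer sketch under the same assumptions (broker address from this run, SASL settings omitted for brevity):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PollUntilCoordinatorReady {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-test-topic"));
                // poll() re-sends FindCoordinator after a transient COORDINATOR_NOT_AVAILABLE
                // response, so the retry visible in the log happens inside the client.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                System.out.println("fetched " + records.count() + " records");
            }
        }
    }
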
19:06:42.618 [data-plane-kafka-request-handler-1] DEBUG kafka.log.OffsetIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.index with maxEntries = 1310720, maxIndexSize = 10485760, entries = 0, lastOffset = 0, file position = 0 19:06:42.618 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.index was not resized because it already has size 10485760 19:06:42.618 [data-plane-kafka-request-handler-1] DEBUG kafka.log.TimeIndex - Loaded index file /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.timeindex with maxEntries = 873813, maxIndexSize = 10485760, entries = 0, lastOffset = TimestampOffset(-1,0), file position = 0 19:06:42.618 [data-plane-kafka-request-handler-1] DEBUG kafka.log.AbstractIndex - Index /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.timeindex was not resized because it already has size 10485756 19:06:42.618 [data-plane-kafka-request-handler-1] INFO kafka.log.UnifiedLog$ - [LogLoader partition=__consumer_offsets-28, dir=/tmp/kafka-unit12180474530667575823] Loading producer state till offset 0 with message format version 2 19:06:42.618 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task flush-metadata-file with initial delay 0 ms and period -1 ms. 19:06:42.619 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task PeriodicProducerExpirationCheck with initial delay 600000 ms and period 600000 ms. 19:06:42.619 [data-plane-kafka-request-handler-1] INFO kafka.log.LogManager - Created log for partition __consumer_offsets-28 in /tmp/kafka-unit12180474530667575823/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} 19:06:42.619 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 19:06:42.619 [data-plane-kafka-request-handler-1] INFO kafka.cluster.Partition - [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 19:06:42.619 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Leader __consumer_offsets-28 with topic id Some(eotegnaZSI-1iOOsuTZpHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas []. Previous leader epoch was -1. 19:06:42.620 [data-plane-kafka-request-handler-1] DEBUG kafka.server.epoch.LeaderEpochFileCache - [LeaderEpochCache __consumer_offsets-28] Appended new epoch entry EpochEntry(epoch=0, startOffset=0). Cache now contains 1 entries. 19:06:42.626 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 19:06:42.627 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 19:06:42.629 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-3 with initial delay 0 ms and period -1 ms. 
19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-18 with initial delay 0 ms and period -1 ms. 19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-41 with initial delay 0 ms and period -1 ms. 19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 19:06:42.630 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-10 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-33 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-48 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-19 with initial delay 0 ms and period -1 ms. 
19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-34 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-4 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 19:06:42.631 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-11 with initial delay 0 ms and period -1 ms. 19:06:42.631 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-26 with initial delay 0 ms and period -1 ms. 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-49 with initial delay 0 ms and period -1 ms. 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-39 with initial delay 0 ms and period -1 ms. 
19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-9 with initial delay 0 ms and period -1 ms. 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-24 with initial delay 0 ms and period -1 ms. 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 19:06:42.632 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-3 for epoch 0 19:06:42.632 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-31 with initial delay 0 ms and period -1 ms. 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-46 with initial delay 0 ms and period -1 ms. 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-1 with initial delay 0 ms and period -1 ms. 
19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-16 with initial delay 0 ms and period -1 ms. 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-2 with initial delay 0 ms and period -1 ms. 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 19:06:42.633 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-25 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-40 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-47 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-17 with initial delay 0 ms and period -1 ms. 
19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-32 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-37 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-7 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-22 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-29 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-44 with initial delay 0 ms and period -1 ms. 
19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-14 with initial delay 0 ms and period -1 ms. 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 19:06:42.634 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-23 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-38 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-8 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-45 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-15 with initial delay 0 ms and period -1 ms. 
19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-30 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-0 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-35 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-5 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-20 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-27 with initial delay 0 ms and period -1 ms. 
19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-42 with initial delay 0 ms and period -1 ms. 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 19:06:42.635 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-12 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-21 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-36 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-6 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-43 with initial delay 0 ms and period -1 ms. 
19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-13 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 19:06:42.636 [data-plane-kafka-request-handler-1] DEBUG kafka.utils.KafkaScheduler - Scheduling task __consumer_offsets-28 with initial delay 0 ms and period -1 ms. 19:06:42.636 [data-plane-kafka-request-handler-1] INFO state.change.logger - [Broker id=1] Finished LeaderAndIsr request in 815ms correlationId 3 from controller 1 for 50 partitions 19:06:42.638 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received LEADER_AND_ISR response from node 1 for request with header RequestHeader(apiKey=LEADER_AND_ISR, apiVersion=6, clientId=1, correlationId=3): LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=eotegnaZSI-1iOOsuTZpHg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) 19:06:42.639 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Sending UPDATE_METADATA request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4) and timeout 30000 to node 1: UpdateMetadataRequestData(controllerId=1, controllerEpoch=1, brokerEpoch=25, ungroupedPartitionStates=[], topicStates=[UpdateMetadataTopicState(topicName='__consumer_offsets', topicId=eotegnaZSI-1iOOsuTZpHg, partitionStates=[UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]), UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[])])], liveBrokers=[UpdateMetadataBroker(id=1, v0Host='', v0Port=0, endpoints=[UpdateMetadataEndpoint(port=38537, host='localhost', listener='SASL_PLAINTEXT', securityProtocol=2)], rack=null)]) 19:06:42.640 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":4,"requestApiVersion":6,"correlationId":3,"clientId":"1","requestApiKeyName":"LEADER_AND_ISR"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"type":0,"topicStates":[{"topicName":"__consumer_offsets","topicId":"eotegnaZSI-1iOOsuTZpHg","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1]
,"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":25,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecov
eryState":0},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionE
poch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"partitionEpoch":0,"replicas":[1],"addingReplicas":[],"removingReplicas":[],"isNew":true,"leaderRecoveryState":0}]}],"liveLeaders":[{"brokerId":1,"hostName":"localhost","port":38537}]},"response":{"errorCode":0,"topics":[{"topicId":"eotegnaZSI-1iOOsuTZpHg","partitionErrors":[{"partitionIndex":13,"errorCode":0},{"partitionIndex":46,"errorCode":0},{"partitionIndex":9,"errorCode":0},{"partitionIndex":42,"errorCode":0},{"partitionIndex":21,"errorCode":0},{"partitionIndex":17,"errorCode":0},{"partitionIndex":30,"errorCode":0},{"partitionIndex":26,"errorCode":0},{"partitionIndex":5,"errorCode":0},{"partitionIndex":38,"errorCode":0},{"partitionIndex":1,"errorCode":0},{"partitionIndex":34,"errorCode":0},{"partitionIndex":16,"errorCode":0},{"partitionIndex":45,"errorCode":0},{"partitionIndex":12,"errorCode":0},{"partitionIndex":41,"errorCode":0},{"partitionIndex":24,"errorCode":0},{"partitionIndex":20,"errorCode":0},{"partitionIndex":49,"errorCode":0},{"partitionIndex":0,"errorCode":0},{"partitionIndex":29,"errorCode":0},{"partitionIndex":25,"errorCode":0},{"partitionIndex":8,"errorCode":0},{"partitionIndex":37,"errorCode":0},{"partitionIndex":4,"errorCode":0},{"partitionIndex":33,"errorCode":0},{"partitionIndex":15,"errorCode":0},{"partitionIndex":48,"errorCode":0},{"partitionIndex":11,"errorCode":0},{"partitionIndex":44,"errorCode":0},{"partitionIndex":23,"errorCode":0},{"partitionIndex":19,"errorCode":0},{"partitionIndex":32,"errorCode":0},{"partitionIndex":28,"errorCode":0},{"partitionIndex":7,"errorCode":0},{"partitionIndex":40,"errorCode":0},{"partitionIndex":3,"errorCode":0},{"partitionIndex":36,"errorCode":0},{"partitionIndex":47,"errorCode":0},{"partitionIndex":14,"errorCode":0},{"partitionIndex":43,"errorCode":0},{"partitionIndex":10,"errorCode":0},{"partitionIndex":22,"errorCode":0},{"partitionIndex":18,"errorCode":0},{"partitionIndex":31,"errorCode":0},{"partitionIndex":27,"errorCode":0},{"partitionIndex":39,"errorCode":0},{"partitionIndex":6,"errorCode":0},{"partitionIndex":35,"errorCode":0},{"partitionIndex":2,"errorCode":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54958-0","totalTimeMs":817.983,"requestQueueTimeMs":0.919,"localTimeMs":816.534,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.153,"sendTimeMs":0.374,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:42.641 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 
19:06:42.641 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-18 for epoch 0 19:06:42.642 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:06:42.642 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-41 for epoch 0 19:06:42.642 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.642 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-10 for epoch 0 19:06:42.642 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.642 [data-plane-kafka-request-handler-0] INFO state.change.logger - [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 19:06:42.642 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-33 for epoch 0 19:06:42.642 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 19:06:42.642 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-48 for epoch 0 19:06:42.642 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
19:06:42.643 [TestBroker:1:Controller-1-to-broker-1-send-thread] DEBUG org.apache.kafka.clients.NetworkClient - [Controller id=1, targetBrokerId=1] Received UPDATE_METADATA response from node 1 for request with header RequestHeader(apiKey=UPDATE_METADATA, apiVersion=7, clientId=1, correlationId=4): UpdateMetadataResponseData(errorCode=0) 19:06:42.644 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":6,"requestApiVersion":7,"correlationId":4,"clientId":"1","requestApiKeyName":"UPDATE_METADATA"},"request":{"controllerId":1,"controllerEpoch":1,"brokerEpoch":25,"topicStates":[{"topicName":"__consumer_offsets","topicId":"eotegnaZSI-1iOOsuTZpHg","partitionStates":[{"partitionIndex":13,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":46,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":9,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":42,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":21,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":17,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":30,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":26,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":5,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":38,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":1,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":34,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":16,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":45,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":12,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":41,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":24,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":20,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":49,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":0,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":29,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":25,"controllerEpoch":1,"leader":
1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":8,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":37,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":4,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":33,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":15,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":48,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":11,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":44,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":23,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":19,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":32,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":28,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":7,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":40,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":3,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":36,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":47,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":14,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":43,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":10,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":22,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":18,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":31,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":27,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":39,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":6,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]},{"partitionIndex":35,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]
},{"partitionIndex":2,"controllerEpoch":1,"leader":1,"leaderEpoch":0,"isr":[1],"zkVersion":0,"replicas":[1],"offlineReplicas":[]}]}],"liveBrokers":[{"id":1,"endpoints":[{"port":38537,"host":"localhost","listener":"SASL_PLAINTEXT","securityProtocol":2}],"rack":null}]},"response":{"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54958-0","totalTimeMs":2.513,"requestQueueTimeMs":0.546,"localTimeMs":1.701,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.072,"sendTimeMs":0.193,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:42.644 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-19 for epoch 0 19:06:42.644 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.644 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-34 for epoch 0 19:06:42.644 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.644 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-4 for epoch 0 19:06:42.644 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.644 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-11 for epoch 0 19:06:42.644 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.644 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-26 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-49 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-39 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-9 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-24 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-31 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-46 for epoch 0 19:06:42.645 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.645 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-1 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-16 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-2 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-25 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-40 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-47 for epoch 0 19:06:42.646 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.646 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-17 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-32 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-37 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-7 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-22 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-29 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-44 for epoch 0 19:06:42.647 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.647 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-14 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-23 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-38 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-8 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-45 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-15 for epoch 0 19:06:42.648 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.648 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-30 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-0 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-35 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-5 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-20 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-27 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-42 for epoch 0 19:06:42.649 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.649 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-12 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 15 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-21 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-36 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-6 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-43 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-13 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 19:06:42.650 [group-metadata-manager-0] DEBUG kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Started loading offsets and group metadata from __consumer_offsets-28 for epoch 0 19:06:42.650 [group-metadata-manager-0] INFO kafka.coordinator.group.GroupMetadataManager - [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
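The loading pass above touches all 50 __consumer_offsets partitions with a single replica each, consistent with Kafka's default offsets.topic.num.partitions=50 on a one-broker test cluster. The group coordinator for a consumer group is the broker leading the offsets partition that the group id hashes to; a minimal sketch of that mapping, using the 50-partition default seen here and the mso-group id that appears later in this log, is:

// Sketch of Kafka's group-id -> __consumer_offsets partition mapping
// (the group coordinator owns abs(groupId.hashCode()) % offsets.topic.num.partitions,
//  where abs() masks the sign bit). Values below are taken from this log run.
public class GroupOffsetsPartitionSketch {
    public static void main(String[] args) {
        String groupId = "mso-group";   // group id used by the consumer later in this log
        int offsetsPartitions = 50;     // default offsets.topic.num.partitions, matching the 50 loads above
        int partition = (groupId.hashCode() & 0x7fffffff) % offsetsPartitions;
        System.out.println(groupId + " -> __consumer_offsets-" + partition);
    }
}

With a single broker, whichever partition that is, its leader is necessarily broker 1, which is why the FindCoordinator exchange below answers with nodeId=1.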
19:06:42.710 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:06:42.711 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=24) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:06:42.715 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received METADATA response from node 1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=24): MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:06:42.715 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updating last seen epoch for partition my-test-topic-0 from 0 to epoch 0 from new metadata 19:06:42.715 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":24,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":false,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":2.605,"requestQueueTimeMs":0.427,"localTimeMs":1.62,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.118,"sendTimeMs":0.438,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.715 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Updated cluster metadata updateVersion 13 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:06:42.716 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:06:42.716 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FIND_COORDINATOR request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=25) and timeout 30000 to node 1: FindCoordinatorRequestData(key='', keyType=0, coordinatorKeys=[mso-group]) 19:06:42.720 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FIND_COORDINATOR response from node 1 for request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=25): FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=38537, errorCode=0, errorMessage='')]) 19:06:42.720 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FindCoordinator response ClientResponse(receivedTimeMs=1755112002720, latencyMs=4, disconnected=false, requestHeader=RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=25), responseBody=FindCoordinatorResponseData(throttleTimeMs=0, errorCode=0, errorMessage='', nodeId=0, host='', port=0, coordinators=[Coordinator(key='mso-group', nodeId=1, host='localhost', port=38537, errorCode=0, errorMessage='')])) 19:06:42.721 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Discovered group coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:42.721 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":10,"requestApiVersion":4,"correlationId":25,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FIND_COORDINATOR"},"request":{"keyType":0,"coordinatorKeys":["mso-group"]},"response":{"throttleTimeMs":0,"coordinators":[{"key":"mso-group","nodeId":1,"host":"localhost","port":38537,"errorCode":0,"errorMessage":""}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":3.323,"requestQueueTimeMs":0.202,"localTimeMs":2.761,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.101,"sendTimeMs":0.257,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.721 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:06:42.721 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 2147483646 rack: null) using address localhost/127.0.0.1 19:06:42.721 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:06:42.722 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:54972 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:06:42.722 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:06:42.722 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:54972 19:06:42.725 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Executing onJoinPrepare with generation -1 and memberId 19:06:42.725 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Marking assigned partitions pending for revocation: [] 19:06:42.725 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Heartbeat thread started 19:06:42.727 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending asynchronous auto-commit of offsets {} 19:06:42.730 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2147483646 19:06:42.730 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - 
[Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:06:42.730 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed connection to node 2147483646. Fetching API versions. 19:06:42.730 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:06:42.730 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:06:42.730 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] (Re-)joining group 19:06:42.731 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:06:42.731 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Joining group with current subscription: [my-test-topic] 19:06:42.737 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='')) to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:42.739 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:06:42.739 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:06:42.739 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:06:42.739 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:06:42.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:06:42.743 [main] 
DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INITIAL 19:06:42.743 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to INTERMEDIATE 19:06:42.743 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:06:42.744 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:06:42.744 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:06:42.744 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed asynchronous auto-commit of offsets {} 19:06:42.744 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to COMPLETE 19:06:42.744 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Finished authentication with no session expiration and no session re-authentication 19:06:42.744 [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Successfully authenticated with localhost/127.0.0.1 19:06:42.744 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating API versions fetch from node 2147483646. 
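The sequence above (METADATA, FIND_COORDINATOR, SASL PLAIN handshake, then JoinGroup offering the range and cooperative-sticky protocols) is the client mso-123456-consumer-... in group mso-group joining over SASL_PLAINTEXT and subscribing to my-test-topic. A minimal consumer performing the same handshake could look like the sketch below; the bootstrap port and timeouts are taken from this run, and the JAAS credentials are placeholders, since the log only shows the principal User:admin.

// Minimal sketch of a consumer matching the client seen in this log.
// localhost:38537 is from this run; "admin"/"admin-secret" are placeholder credentials.
// The logged client id is mso-123456-consumer-<uuid>; the suffix is omitted here.
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class MsoGroupConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 50000);      // sessionTimeoutMs from the JoinGroup above
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600000);   // rebalanceTimeoutMs from the JoinGroup above
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-test-topic"));   // topic from the METADATA request above
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.println(r.key() + " -> " + r.value()));
        }
    }
}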
19:06:42.744 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=27) and timeout 30000 to node 2147483646: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:06:42.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received API_VERSIONS response from node 2147483646 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=27): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), 
ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:06:42.747 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":27,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"conn
ection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":1.516,"requestQueueTimeMs":0.265,"localTimeMs":0.949,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.094,"sendTimeMs":0.208,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:06:42.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 2147483646 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
19:06:42.747 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=26) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='') 19:06:42.761 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Dynamic member with unknown member id joins group mso-group in Empty state. Created a new member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d and request the member to rejoin with this id. 19:06:42.767 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=26): JoinGroupResponseData(throttleTimeMs=0, errorCode=79, generationId=-1, protocolType=null, protocolName=null, leader='', skipAssignment=false, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', members=[]) 19:06:42.767 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] JoinGroup failed due to non-fatal error: MEMBER_ID_REQUIRED. Will set the member id as mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d and then rejoin. 
Sent generation was Generation{generationId=-1, memberId='', protocol='null'} 19:06:42.767 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Request joining group due to: need to re-join with the given member-id: mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d 19:06:42.767 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":26,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":""},"response":{"throttleTimeMs":0,"errorCode":79,"generationId":-1,"protocolType":null,"protocolName":null,"leader":"","skipAssignment":false,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","members":[]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":18.196,"requestQueueTimeMs":3.13,"localTimeMs":14.778,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.126,"sendTimeMs":0.161,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:42.767 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 19:06:42.767 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] (Re-)joining group 19:06:42.767 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Joining group with current subscription: [my-test-topic] 19:06:42.768 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending JoinGroup (JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException')) to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:42.768 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending JOIN_GROUP request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=28) and timeout 605000 to node 2147483646: JoinGroupRequestData(groupId='mso-group', sessionTimeoutMs=50000, rebalanceTimeoutMs=600000, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, protocolType='consumer', protocols=[JoinGroupRequestProtocol(name='range', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0]), JoinGroupRequestProtocol(name='cooperative-sticky', metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 4, -1, -1, -1, -1, 0, 0, 0, 0])], reason='rebalance failed due to MemberIdRequiredException') 19:06:42.770 [data-plane-kafka-request-handler-1] DEBUG kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Pending dynamic member with id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d joins group mso-group in Empty state. Adding to the group now. 
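
The MEMBER_ID_REQUIRED round trip above (errorCode=79 followed by an immediate rejoin with the coordinator-assigned member id) is handled entirely inside the consumer; at the application level it is driven by nothing more than a subscribe plus a poll. A minimal sketch, assuming a consumer built as in the earlier configuration sketch:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    // The first poll() triggers coordinator lookup, JoinGroup (including the
    // MEMBER_ID_REQUIRED retry seen above), SyncGroup and the initial fetches.
    static void runOnce(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Collections.singletonList("my-test-topic"));
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
    }
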
19:06:42.773 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:42.775 [data-plane-kafka-request-handler-1] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Preparing to rebalance group mso-group in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) 19:06:45.620 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Processing automatic preferred replica leader election 19:06:45.628 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Topics not in preferred replica for broker 1 HashMap() 19:06:45.629 [controller-event-thread] DEBUG kafka.utils.KafkaScheduler - Scheduling task auto-leader-rebalance-task with initial delay 300000 ms and period -1000 ms. 19:06:45.785 [executor-Rebalance] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Stabilized group mso-group generation 1 (__consumer_offsets-37) with 1 members 19:06:45.791 [executor-Rebalance] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:45.791 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received JOIN_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=JOIN_GROUP, apiVersion=9, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=28): JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', skipAssignment=false, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 19:06:45.792 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful JoinGroup response: JoinGroupResponseData(throttleTimeMs=0, errorCode=0, generationId=1, protocolType='consumer', protocolName='range', leader='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', skipAssignment=false, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', members=[JoinGroupResponseMember(memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, metadata=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, -1, -1, -1, -1, 0, 0, 0, 0])]) 19:06:45.792 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":11,"requestApiVersion":9,"correlationId":28,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"JOIN_GROUP"},"request":{"groupId":"mso-group","sessionTimeoutMs":50000,"rebalanceTimeoutMs":600000,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"protocolType":"consumer","protocols":[{"name":"range","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="},{"name":"cooperative-sticky","metadata":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAABP////8AAAAA"}],"reason":"rebalance failed due to MemberIdRequiredException"},"response":{"throttleTimeMs":0,"errorCode":0,"generationId":1,"protocolType":"consumer","protocolName":"range","leader":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","skipAssignment":false,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","members":[{"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"metadata":"AAEAAAABAA1teS10ZXN0LXRvcGlj/////wAAAAA="}]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":3022.454,"requestQueueTimeMs":0.15,"localTimeMs":7.28,"remoteTimeMs":3013.478,"throttleTimeMs":0,"responseQueueTimeMs":0.279,"sendTimeMs":1.265,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:45.792 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Enabling heartbeat thread 19:06:45.792 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Successfully joined group with generation Generation{generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', protocol='range'} 19:06:45.793 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Performing assignment using strategy range with subscriptions {mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d=Subscription(topics=[my-test-topic], ownedPartitions=[], groupInstanceId=null)} 19:06:45.799 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Finished assignment for group at generation 1: {mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d=Assignment(partitions=[my-test-topic-0])} 19:06:45.804 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending leader SyncGroup to coordinator localhost:38537 (id: 2147483646 rack: null): SyncGroupRequestData(groupId='mso-group', generationId=1, 
memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 19:06:45.806 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending SYNC_GROUP request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=29) and timeout 30000 to node 2147483646: SyncGroupRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, protocolType='consumer', protocolName='range', assignments=[SyncGroupRequestAssignment(memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1])]) 19:06:45.816 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key GroupSyncKey(mso-group) unblocked 1 Rebalance operations 19:06:45.817 [data-plane-kafka-request-handler-0] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Assignment received from leader mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d for group mso-group for generation 1. The group has 1 members, 0 of which are static. 
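
Above, the sole member is elected leader, runs the range strategy over subscription [my-test-topic], and hands the coordinator an assignment of my-test-topic-0 via SyncGroup. The per-topic arithmetic behind that result can be sketched roughly as below; this is a simplification shown only to make the single-member outcome obvious, not the RangeAssignor source:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Simplified per-topic range assignment: members are sorted, each gets a contiguous
    // block of partitions; the first (numPartitions % numMembers) members get one extra.
    static Map<String, List<Integer>> rangeAssign(List<String> members, int numPartitions) {
        List<String> sorted = new ArrayList<>(members);
        Collections.sort(sorted);
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        int perMember = numPartitions / sorted.size();
        int extra = numPartitions % sorted.size();
        int next = 0;
        for (int i = 0; i < sorted.size(); i++) {
            int count = perMember + (i < extra ? 1 : 0);
            List<Integer> partitions = new ArrayList<>();
            for (int p = 0; p < count; p++) {
                partitions.add(next++);
            }
            assignment.put(sorted.get(i), partitions);
        }
        return assignment;
    }
    // One member, one partition of my-test-topic -> that member gets partition 0, matching
    // "Finished assignment for group at generation 1: {... Assignment(partitions=[my-test-topic-0])}".
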
19:06:45.870 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 1 (exclusive)with recovery point 1, last flushed: 1755112002156, current time: 1755112005870,unflushed: 1 19:06:45.973 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=1 segment=[0:458]) 19:06:45.978 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 135 ms 19:06:45.989 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:45.990 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received SYNC_GROUP response from node 2147483646 for request with header RequestHeader(apiKey=SYNC_GROUP, apiVersion=5, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=29): SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 19:06:45.990 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful SyncGroup response: SyncGroupResponseData(throttleTimeMs=0, errorCode=0, protocolType='consumer', protocolName='range', assignment=[0, 1, 0, 0, 0, 1, 0, 13, 109, 121, 45, 116, 101, 115, 116, 45, 116, 111, 112, 105, 99, 0, 0, 0, 1, 0, 0, 0, 0, -1, -1, -1, -1]) 19:06:45.990 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":14,"requestApiVersion":5,"correlationId":29,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"SYNC_GROUP"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"protocolType":"consumer","protocolName":"range","assignments":[{"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"}]},"response":{"throttleTimeMs":0,"errorCode":0,"protocolType":"consumer","protocolName":"range","assignment":"AAEAAAABAA1teS10ZXN0LXRvcGljAAAAAQAAAAD/////"},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":181.152,"requestQueueTimeMs":2.446,"localTimeMs":177.077,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":1.101,"sendTimeMs":0.527,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:45.990 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Successfully synced group in generation Generation{generationId=1, 
memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', protocol='range'} 19:06:45.992 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Executing onJoinComplete with generation 1 and memberId mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d 19:06:45.992 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Notifying assignor about the new Assignment(partitions=[my-test-topic-0]) 19:06:45.998 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Adding newly assigned partitions: my-test-topic-0 19:06:46.003 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Fetching committed offsets for partitions: [my-test-topic-0] 19:06:46.009 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending OFFSET_FETCH request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=30) and timeout 30000 to node 2147483646: OffsetFetchRequestData(groupId='', topics=[], groups=[OffsetFetchRequestGroup(groupId='mso-group', topics=[OffsetFetchRequestTopics(name='my-test-topic', partitionIndexes=[0])])], requireStable=true) 19:06:46.031 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received OFFSET_FETCH response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_FETCH, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=30): OffsetFetchResponseData(throttleTimeMs=0, topics=[], errorCode=0, groups=[OffsetFetchResponseGroup(groupId='mso-group', topics=[OffsetFetchResponseTopics(name='my-test-topic', partitions=[OffsetFetchResponsePartitions(partitionIndex=0, committedOffset=-1, committedLeaderEpoch=-1, metadata='', errorCode=0)])], errorCode=0)]) 19:06:46.032 [main] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Found no committed offset for partition my-test-topic-0 19:06:46.033 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":9,"requestApiVersion":8,"correlationId":30,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"OFFSET_FETCH"},"request":{"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitionIndexes":[0]}]}],"requireStable":true},"response":{"throttleTimeMs":0,"groups":[{"groupId":"mso-group","topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":-1,"committedLeaderEpoch":-1,"metadata":"","errorCode":0}]}],"errorCode":0}]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":21.173,"requestQueueTimeMs":4.479,"localTimeMs":15.865,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.273,"sendTimeMs":0.555,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:46.037 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending ListOffsetRequest ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) to broker localhost:38537 (id: 1 rack: null) 19:06:46.039 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending LIST_OFFSETS request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=31) and timeout 30000 to node 1: ListOffsetsRequestData(replicaId=-1, isolationLevel=0, topics=[ListOffsetsTopic(name='my-test-topic', partitions=[ListOffsetsPartition(partitionIndex=0, currentLeaderEpoch=0, timestamp=-1, maxNumOffsets=1)])]) 19:06:46.057 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received LIST_OFFSETS response from node 1 for request with header RequestHeader(apiKey=LIST_OFFSETS, apiVersion=7, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=31): ListOffsetsResponseData(throttleTimeMs=0, topics=[ListOffsetsTopicResponse(name='my-test-topic', partitions=[ListOffsetsPartitionResponse(partitionIndex=0, errorCode=0, oldStyleOffsets=[], timestamp=-1, offset=0, leaderEpoch=0)])]) 19:06:46.058 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":2,"requestApiVersion":7,"correlationId":31,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"LIST_OFFSETS"},"request":{"replicaId":-1,"isolationLevel":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"currentLeaderEpoch":0,"timestamp":-1}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0,"timestamp":-1,"offset":0,"leaderEpoch":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":16.969,"requestQueueTimeMs":4.647,"localTimeMs":11.846,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.111,"sendTimeMs":0.363,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:46.058 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Handling ListOffsetResponse response for my-test-topic-0. Fetched offset 0, timestamp -1 19:06:46.062 [main] DEBUG org.apache.kafka.clients.Metadata - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Not replacing existing epoch 0 with new epoch 0 for partition my-test-topic-0 19:06:46.063 [main] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Resetting offset for partition my-test-topic-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}}. 19:06:46.069 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:46.069 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built full fetch (sessionId=INVALID, epoch=INITIAL) for node 1 with 1 partition(s). 
19:06:46.070 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED FullFetchRequest(toSend=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:46.073 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=32) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=0, sessionEpoch=0, topics=[FetchTopic(topic='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=0, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 19:06:46.083 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new full FetchContext with 1 partition(s). 19:06:46.623 [executor-Fetch] DEBUG kafka.server.FetchSessionCache - Created fetch session FetchSession(id=1631787598, privileged=false, partitionMap.size=1, usesTopicIds=true, creationMs=1755112006619, lastUsedMs=1755112006619, epoch=1) 19:06:46.629 [executor-Fetch] DEBUG kafka.server.FullFetchContext - Full fetch context with session id 1631787598 returning 1 partition(s) 19:06:46.656 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":32,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":0,"sessionEpoch":0,"topics":[{"topicId":"Y7FYMZASRPqaYrfAS485Ow","partitions":[{"partition":0,"currentLeaderEpoch":0,"fetchOffset":0,"lastFetchedEpoch":-1,"logStartOffset":-1,"partitionMaxBytes":1048576}]}],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[{"topicId":"Y7FYMZASRPqaYrfAS485Ow","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":0,"lastStableOffset":0,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":580.796,"requestQueueTimeMs":3.069,"localTimeMs":33.133,"remoteTimeMs":536.624,"throttleTimeMs":0,"responseQueueTimeMs":7.583,"sendTimeMs":0.384,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:46.656 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=32): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[FetchableTopicResponse(topic='', topicId=Y7FYMZASRPqaYrfAS485Ow, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, 
logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3]))])]) 19:06:46.659 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent a full fetch response that created a new incremental fetch session 1631787598 with 1 response partition(s) 19:06:46.662 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=0, lastStableOffset=0, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=0, buffer=java.nio.HeapByteBuffer[pos=0 lim=0 cap=3])) 19:06:46.666 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:46.666 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=1) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:46.666 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:46.666 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=33) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=1, topics=[], forgottenTopicsData=[], rackId='') 19:06:46.670 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 2: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:47.178 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:47.179 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=33): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:47.180 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:47.180 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":33,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":1,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":512.236,"requestQueueTimeMs":0.201,"localTimeMs":6.486,"remoteTimeMs":504.948,"throttleTimeMs":0,"responseQueueTimeMs":0.227,"sendTimeMs":0.372,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:47.181 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:47.181 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=2) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:47.181 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:47.181 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=34) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=2, topics=[], forgottenTopicsData=[], rackId='') 19:06:47.183 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 3: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:47.686 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:47.687 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=34): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:47.688 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:47.688 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:47.688 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=3) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:47.689 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:47.689 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=35) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=3, topics=[], forgottenTopicsData=[], rackId='') 19:06:47.689 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":34,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":2,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.035,"requestQueueTimeMs":0.38,"localTimeMs":1.516,"remoteTimeMs":502.333,"throttleTimeMs":0,"responseQueueTimeMs":0.294,"sendTimeMs":0.51,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:47.691 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 4: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:48.193 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:48.195 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=35): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:48.195 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":35,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":3,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.813,"requestQueueTimeMs":0.339,"localTimeMs":1.919,"remoteTimeMs":501.908,"throttleTimeMs":0,"responseQueueTimeMs":0.215,"sendTimeMs":0.43,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:48.196 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:48.197 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:48.197 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=4) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:48.197 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:48.197 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=36) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=4, topics=[], forgottenTopicsData=[], rackId='') 19:06:48.199 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 5: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:48.703 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:48.704 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=36): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:48.705 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":36,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":4,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.639,"requestQueueTimeMs":0.271,"localTimeMs":1.67,"remoteTimeMs":502.994,"throttleTimeMs":0,"responseQueueTimeMs":0.18,"sendTimeMs":0.523,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:48.705 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:48.706 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:48.706 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=5) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:48.706 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:48.706 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=37) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=5, topics=[], forgottenTopicsData=[], rackId='') 19:06:48.708 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 6: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:48.793 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:48.798 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=38) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null) 19:06:48.804 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:48.808 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=38): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:06:48.808 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful Heartbeat response 19:06:48.809 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":38,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":8.648,"requestQueueTimeMs":2.13,"localTimeMs":5.763,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.221,"sendTimeMs":0.532,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:49.211 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:49.212 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=37): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:49.213 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":37,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":5,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.636,"requestQueueTimeMs":0.275,"localTimeMs":1.955,"remoteTimeMs":501.709,"throttleTimeMs":0,"responseQueueTimeMs":0.168,"sendTimeMs":0.527,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:49.216 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:49.217 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:49.217 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=6) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:49.217 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:49.217 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=39) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=6, topics=[], forgottenTopicsData=[], rackId='') 19:06:49.218 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 7: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:49.721 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:49.723 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=39): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:49.724 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:49.724 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":39,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":6,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.477,"requestQueueTimeMs":0.288,"localTimeMs":1.144,"remoteTimeMs":502.457,"throttleTimeMs":0,"responseQueueTimeMs":0.165,"sendTimeMs":0.421,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:49.725 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:49.726 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=7) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:49.726 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:49.726 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=40) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=7, topics=[], forgottenTopicsData=[], rackId='') 19:06:49.727 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 8: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:50.230 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:50.232 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=40): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:50.232 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:50.233 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":40,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":7,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.89,"requestQueueTimeMs":0.286,"localTimeMs":1.469,"remoteTimeMs":502.325,"throttleTimeMs":0,"responseQueueTimeMs":0.251,"sendTimeMs":0.556,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:50.233 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:50.233 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=8) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:50.233 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:50.234 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=41) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=8, topics=[], forgottenTopicsData=[], rackId='') 19:06:50.235 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 9: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:50.738 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:50.739 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=41): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:50.740 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:50.740 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":41,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":8,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.823,"requestQueueTimeMs":0.299,"localTimeMs":1.418,"remoteTimeMs":502.403,"throttleTimeMs":0,"responseQueueTimeMs":0.246,"sendTimeMs":0.455,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:50.741 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:50.741 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=9) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:50.741 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:50.741 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=42) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=9, topics=[], forgottenTopicsData=[], rackId='') 19:06:50.742 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 10: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:50.994 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:06:50.996 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=43) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, 
memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 19:06:51.007 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:51.016 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 2 (exclusive)with recovery point 2, last flushed: 1755112005973, current time: 1755112011016,unflushed: 1 19:06:51.021 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=1 segment=[0:458]) to (offset=2 segment=[0:582]) 19:06:51.021 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 6 ms 19:06:51.032 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=43): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 19:06:51.032 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":43,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":33.788,"requestQueueTimeMs":3.98,"localTimeMs":29.292,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.164,"sendTimeMs":0.351,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:51.032 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 19:06:51.032 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:06:51.246 [executor-Fetch] DEBUG 
kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:51.247 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=42): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:51.248 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:51.248 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":42,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":9,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.48,"requestQueueTimeMs":0.219,"localTimeMs":2.085,"remoteTimeMs":502.446,"throttleTimeMs":0,"responseQueueTimeMs":0.26,"sendTimeMs":0.467,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:51.248 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:51.249 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=10) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:51.249 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:51.249 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=44) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=10, topics=[], forgottenTopicsData=[], rackId='') 19:06:51.250 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 11: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:51.753 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:51.754 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=44): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:51.755 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:51.755 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":44,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":10,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.519,"requestQueueTimeMs":0.298,"localTimeMs":1.749,"remoteTimeMs":501.764,"throttleTimeMs":0,"responseQueueTimeMs":0.215,"sendTimeMs":0.491,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:51.756 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:51.756 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=11) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:51.756 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:51.756 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=45) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=11, topics=[], forgottenTopicsData=[], rackId='') 19:06:51.758 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 12: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:51.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:51.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=46) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null) 19:06:51.796 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:51.798 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":46,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":2.04,"requestQueueTimeMs":0.383,"localTimeMs":1.244,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.112,"sendTimeMs":0.3,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:51.804 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=46): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:06:51.805 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful Heartbeat response 19:06:52.260 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:52.262 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=45): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:52.263 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:52.263 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":45,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":11,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.17,"requestQueueTimeMs":0.266,"localTimeMs":1.219,"remoteTimeMs":502.882,"throttleTimeMs":0,"responseQueueTimeMs":0.239,"sendTimeMs":0.563,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:52.264 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch 
request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:52.264 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=12) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:52.264 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:52.264 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=47) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=12, topics=[], forgottenTopicsData=[], rackId='') 19:06:52.266 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 13: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:52.624 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:06:52.625 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 19:06:52.625 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 19:06:52.625 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002c50e0000 after 2ms. 
19:06:52.769 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:52.771 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=47): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:52.771 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:52.771 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":47,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":12,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.447,"requestQueueTimeMs":0.27,"localTimeMs":1.676,"remoteTimeMs":502.727,"throttleTimeMs":0,"responseQueueTimeMs":0.206,"sendTimeMs":0.567,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:52.772 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:52.772 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=13) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:52.772 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:52.772 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=48) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=13, topics=[], forgottenTopicsData=[], rackId='') 19:06:52.774 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 14: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:53.276 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:53.279 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=48): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:53.279 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":48,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":13,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.604,"requestQueueTimeMs":0.327,"localTimeMs":1.711,"remoteTimeMs":502.898,"throttleTimeMs":0,"responseQueueTimeMs":0.188,"sendTimeMs":0.479,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:53.279 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:53.280 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:53.280 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=14) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:53.280 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:53.281 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=49) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=14, topics=[], forgottenTopicsData=[], rackId='') 19:06:53.287 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 15: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:53.792 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:53.794 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=49): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:53.794 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:53.794 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":49,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":14,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":511.531,"requestQueueTimeMs":0.27,"localTimeMs":8.35,"remoteTimeMs":502.265,"throttleTimeMs":0,"responseQueueTimeMs":0.17,"sendTimeMs":0.474,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:53.795 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:53.795 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=15) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:53.795 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:53.795 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=50) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=15, topics=[], forgottenTopicsData=[], rackId='') 19:06:53.797 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 16: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:54.300 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:54.302 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=50): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:54.302 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":50,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":15,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.535,"requestQueueTimeMs":0.31,"localTimeMs":1.956,"remoteTimeMs":502.802,"throttleTimeMs":0,"responseQueueTimeMs":0.132,"sendTimeMs":0.332,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:54.302 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:54.303 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:54.303 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=16) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:54.304 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:54.304 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=51) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=16, topics=[], forgottenTopicsData=[], rackId='') 19:06:54.306 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 17: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:54.315 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-13. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.321 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-46. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.321 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-9. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-42. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-21. 
Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-17. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-30. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-26. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-5. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-38. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-1. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.322 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-34. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-16. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-45. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-12. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-41. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-24. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-20. 
Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-49. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-0. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.323 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-29. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-25. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-8. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-37. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-4. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-33. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-15. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-48. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-11. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.324 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-44. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-23. 
Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-19. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-32. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-28. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-7. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-40. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-3. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-36. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-47. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.325 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-14. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-43. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-10. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-22. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-18. 
Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-31. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-27. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-39. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-6. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.326 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-35. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.327 [kafka-log-cleaner-thread-0] DEBUG kafka.log.LogCleanerManager$ - Finding range of cleanable offsets for log=__consumer_offsets-2. Last clean offset=None now=1755112014310 => firstDirtyOffset=0 firstUncleanableOffset=0 activeSegment.baseOffset=0 19:06:54.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:54.795 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=52) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null) 19:06:54.797 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:54.799 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=52): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:06:54.800 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful Heartbeat response 19:06:54.800 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":52,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":2.718,"requestQueueTimeMs":0.351,"localTimeMs":1.865,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.106,"sendTimeMs":0.395,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:54.809 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:54.809 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=51): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:54.810 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":51,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":16,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.391,"requestQueueTimeMs":0.31,"localTimeMs":1.881,"remoteTimeMs":501.961,"throttleTimeMs":0,"responseQueueTimeMs":0.077,"sendTimeMs":0.159,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:54.810 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:54.810 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:54.810 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, 
groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=17) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:54.811 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:54.811 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=53) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=17, topics=[], forgottenTopicsData=[], rackId='') 19:06:54.812 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 18: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:55.314 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:55.315 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=53): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:55.316 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:55.316 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":53,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":17,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.124,"requestQueueTimeMs":0.223,"localTimeMs":1.287,"remoteTimeMs":502.072,"throttleTimeMs":0,"responseQueueTimeMs":0.151,"sendTimeMs":0.388,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:55.316 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, 
currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:55.316 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=18) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:55.316 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:55.317 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=54) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=18, topics=[], forgottenTopicsData=[], rackId='') 19:06:55.318 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 19: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:55.821 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:55.822 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=54): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:55.823 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:55.823 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":54,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":18,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.787,"requestQueueTimeMs":0.425,"localTimeMs":1.85,"remoteTimeMs":501.843,"throttleTimeMs":0,"responseQueueTimeMs":0.221,"sendTimeMs":0.445,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:55.823 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:55.823 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=19) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:55.823 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:55.824 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=55) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=19, topics=[], forgottenTopicsData=[], rackId='') 19:06:55.825 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 20: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:55.993 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:06:55.994 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=56) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 19:06:55.997 [data-plane-kafka-request-handler-1] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:55.999 [data-plane-kafka-request-handler-1] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1755112011021, current time: 
1755112015999,unflushed: 1 19:06:56.005 [data-plane-kafka-request-handler-1] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=2 segment=[0:582]) to (offset=3 segment=[0:706]) 19:06:56.005 [data-plane-kafka-request-handler-1] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 7 ms 19:06:56.007 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=56): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 19:06:56.008 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 19:06:56.008 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":56,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":12.134,"requestQueueTimeMs":0.381,"localTimeMs":11.161,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.165,"sendTimeMs":0.426,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:56.008 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:06:56.329 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:56.331 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=55): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:56.331 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:56.331 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":55,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":19,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":506.094,"requestQueueTimeMs":0.219,"localTimeMs":1.32,"remoteTimeMs":503.884,"throttleTimeMs":0,"responseQueueTimeMs":0.179,"sendTimeMs":0.489,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:56.332 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:56.333 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=20) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:56.333 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:56.333 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=57) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=20, topics=[], forgottenTopicsData=[], rackId='') 19:06:56.335 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 21: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:56.838 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:56.840 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=57): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:56.840 [main] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:56.840 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":57,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":20,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.139,"requestQueueTimeMs":0.36,"localTimeMs":1.699,"remoteTimeMs":502.419,"throttleTimeMs":0,"responseQueueTimeMs":0.216,"sendTimeMs":0.443,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:56.841 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:56.841 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=21) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:56.841 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:56.842 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=58) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=21, topics=[], forgottenTopicsData=[], rackId='') 19:06:56.843 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 22: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:57.346 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:57.348 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=58): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:57.348 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:57.348 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":58,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":21,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.154,"requestQueueTimeMs":0.317,"localTimeMs":1.936,"remoteTimeMs":502.231,"throttleTimeMs":0,"responseQueueTimeMs":0.227,"sendTimeMs":0.442,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:57.349 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node 
localhost:38537 (id: 1 rack: null) 19:06:57.349 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=22) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:57.349 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:57.349 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=59) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=22, topics=[], forgottenTopicsData=[], rackId='') 19:06:57.351 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 23: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:57.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d to coordinator localhost:38537 (id: 2147483646 rack: null) 19:06:57.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=60) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null) 19:06:57.798 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:06:57.799 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=60): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:06:57.800 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful Heartbeat response 19:06:57.800 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":60,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":2.336,"requestQueueTimeMs":0.303,"localTimeMs":1.758,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.095,"sendTimeMs":0.179,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:57.854 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:57.856 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=59): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:57.856 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:57.857 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:57.857 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":59,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":22,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.382,"requestQueueTimeMs":0.243,"localTimeMs":1.708,"remoteTimeMs":502.679,"throttleTimeMs":0,"responseQueueTimeMs":0.223,"sendTimeMs":0.526,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:57.857 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=23) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:57.857 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:57.857 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=61) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=23, topics=[], forgottenTopicsData=[], rackId='') 19:06:57.858 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 24: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:58.361 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:58.362 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=61): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:58.363 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":61,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":23,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.08,"requestQueueTimeMs":0.244,"localTimeMs":1.479,"remoteTimeMs":501.83,"throttleTimeMs":0,"responseQueueTimeMs":0.153,"sendTimeMs":0.372,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:58.363 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:58.364 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 
rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:58.364 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=24) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:58.364 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:58.364 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=62) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=24, topics=[], forgottenTopicsData=[], rackId='') 19:06:58.366 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 25: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:58.870 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:58.871 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=62): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:58.871 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":62,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":24,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.644,"requestQueueTimeMs":0.27,"localTimeMs":1.836,"remoteTimeMs":503.048,"throttleTimeMs":0,"responseQueueTimeMs":0.148,"sendTimeMs":0.339,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:58.872 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:58.873 [main] DEBUG 
org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:58.873 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=25) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:58.873 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:58.873 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=63) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=25, topics=[], forgottenTopicsData=[], rackId='') 19:06:58.875 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 26: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:59.378 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:59.380 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=63): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:59.380 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":63,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":25,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.248,"requestQueueTimeMs":0.455,"localTimeMs":1.713,"remoteTimeMs":502.518,"throttleTimeMs":0,"responseQueueTimeMs":0.158,"sendTimeMs":0.402,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:59.381 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:59.381 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:59.381 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=26) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:59.381 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:59.382 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=64) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=26, topics=[], forgottenTopicsData=[], rackId='') 19:06:59.384 [data-plane-kafka-request-handler-0] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 27: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:06:59.886 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:06:59.888 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=64): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:06:59.889 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":64,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":26,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.122,"requestQueueTimeMs":0.262,"localTimeMs":1.838,"remoteTimeMs":502.407,"throttleTimeMs":0,"responseQueueTimeMs":0.234,"sendTimeMs":0.379,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:06:59.889 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:06:59.890 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:06:59.890 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=27) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:06:59.890 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:06:59.890 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=65) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=27, topics=[], forgottenTopicsData=[], rackId='') 19:06:59.892 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 28: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:07:00.394 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:07:00.396 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=65): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:07:00.397 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:07:00.397 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":65,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":27,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":504.573,"requestQueueTimeMs":0.227,"localTimeMs":1.461,"remoteTimeMs":502.02,"throttleTimeMs":0,"responseQueueTimeMs":0.224,"sendTimeMs":0.639,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:00.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 
1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:07:00.397 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=28) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:07:00.397 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:07:00.398 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=66) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=28, topics=[], forgottenTopicsData=[], rackId='') 19:07:00.399 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 29: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:07:00.798 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending Heartbeat request with generation 1 and member id mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d to coordinator localhost:38537 (id: 2147483646 rack: null) 19:07:00.799 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending HEARTBEAT request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=67) and timeout 30000 to node 2147483646: HeartbeatRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null) 19:07:00.801 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:07:00.802 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received HEARTBEAT response from node 2147483646 for request with header RequestHeader(apiKey=HEARTBEAT, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=67): HeartbeatResponseData(throttleTimeMs=0, errorCode=0) 19:07:00.802 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received successful Heartbeat response 19:07:00.802 
[data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":12,"requestApiVersion":4,"correlationId":67,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"HEARTBEAT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null},"response":{"throttleTimeMs":0,"errorCode":0},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":2.056,"requestQueueTimeMs":0.28,"localTimeMs":1.309,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.143,"sendTimeMs":0.323,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:00.902 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:07:00.903 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=66): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:07:00.904 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:07:00.904 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":66,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":28,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.037,"requestQueueTimeMs":0.205,"localTimeMs":1.667,"remoteTimeMs":502.419,"throttleTimeMs":0,"responseQueueTimeMs":0.251,"sendTimeMs":0.494,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:00.905 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:07:00.905 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=29) for node 1. 
Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:07:00.905 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:07:00.905 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=68) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=29, topics=[], forgottenTopicsData=[], rackId='') 19:07:00.906 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 30: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:07:00.994 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:07:00.995 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending OFFSET_COMMIT request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=69) and timeout 30000 to node 2147483646: OffsetCommitRequestData(groupId='mso-group', generationId=1, memberId='mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d', groupInstanceId=null, retentionTimeMs=-1, topics=[OffsetCommitRequestTopic(name='my-test-topic', partitions=[OffsetCommitRequestPartition(partitionIndex=0, committedOffset=0, committedLeaderEpoch=-1, commitTimestamp=-1, committedMetadata='')])]) 19:07:00.997 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key MemberKey(mso-group,mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d) unblocked 1 Heartbeat operations 19:07:00.998 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 4 (exclusive)with recovery point 4, last flushed: 1755112016005, current time: 1755112020998,unflushed: 1 19:07:01.027 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition __consumer_offsets-37 broker=1] High watermark updated from (offset=3 segment=[0:706]) to (offset=4 segment=[0:830]) 19:07:01.027 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 30 ms 19:07:01.030 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received OFFSET_COMMIT response from node 2147483646 for request with header RequestHeader(apiKey=OFFSET_COMMIT, apiVersion=8, 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=69): OffsetCommitResponseData(throttleTimeMs=0, topics=[OffsetCommitResponseTopic(name='my-test-topic', partitions=[OffsetCommitResponsePartition(partitionIndex=0, errorCode=0)])]) 19:07:01.030 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Committed offset 0 for partition my-test-topic-0 19:07:01.030 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":8,"requestApiVersion":8,"correlationId":69,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"OFFSET_COMMIT"},"request":{"groupId":"mso-group","generationId":1,"memberId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27-4206ed4e-0a48-4fc2-880d-094163bb2a9d","groupInstanceId":null,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"committedOffset":0,"committedLeaderEpoch":-1,"committedMetadata":""}]}]},"response":{"throttleTimeMs":0,"topics":[{"name":"my-test-topic","partitions":[{"partitionIndex":0,"errorCode":0}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54972-3","totalTimeMs":34.49,"requestQueueTimeMs":0.392,"localTimeMs":33.26,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.354,"sendTimeMs":0.482,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.031 [main] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Completed asynchronous auto-commit of offsets {my-test-topic-0=OffsetAndMetadata{offset=0, leaderEpoch=null, metadata=''}} 19:07:01.410 [executor-Fetch] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 0 partition(s) 19:07:01.411 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=68): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[]) 19:07:01.412 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 0 response partition(s), 1 implied partition(s) 19:07:01.412 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":68,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":29,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":505.485,"requestQueueTimeMs":0.25,"localTimeMs":1.76,"remoteTimeMs":502.74,"throttleTimeMs":0,"responseQueueTimeMs":0.243,"sendTimeMs":0.489,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.412 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:07:01.412 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=30) for node 1. Added 0 partition(s), altered 0 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:07:01.413 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(), toForget=(), toReplace=(), implied=(my-test-topic-0), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:07:01.413 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=70) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=30, topics=[], forgottenTopicsData=[], rackId='') 19:07:01.414 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 31: added 0 partition(s), updated 0 partition(s), removed 0 partition(s) 19:07:01.535 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [SASL_PLAINTEXT://localhost:38537] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO 
metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:01.550 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Instantiated an idempotent producer. 19:07:01.574 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:01.574 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:01.574 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112021574 19:07:01.574 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Starting Kafka producer I/O thread. 
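The ProducerConfig dump above shows the producer the pairwise test brings up alongside the consumer: SASL_PLAINTEXT with the PLAIN mechanism, idempotence enabled, acks = -1 (the wire form of acks=all), and String serializers. As a minimal sketch only, assuming the broker address and topic from this log and a hypothetical password, such a producer could be configured like this (illustrative, not the sdc-distribution-client code itself):

// Minimal sketch (not the project's actual code): a KafkaProducer configured to match
// the ProducerConfig values logged above. Broker address, client id, topic and the
// password are illustrative placeholders taken from or assumed beyond this log.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PairwiseProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38537"); // embedded test broker
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "mso-123456-producer");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                        // logged as acks = -1
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // SASL/PLAIN over plaintext, as negotiated in the handshake below; the password is hypothetical.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-test-topic", "key", "value"));
            producer.flush(); // block until the broker has acknowledged the record
        }
    }
}

Because enable.idempotence is true while transactional.id is null, the producer immediately enqueues the non-transactional InitProducerIdRequest seen a few lines below, as soon as metadata for my-test-topic has been fetched.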
19:07:01.574 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Kafka producer started 19:07:01.576 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Transition from state UNINITIALIZED to INITIALIZING 19:07:01.578 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:01.579 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: -1 rack: null) for sending metadata request 19:07:01.579 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.579 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:01.580 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.580 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.581 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:36412 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:07:01.581 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:36412 19:07:01.584 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 19:07:01.584 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:07:01.584 
[kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Completed connection to node -1. Fetching API versions. 19:07:01.585 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:07:01.585 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:07:01.586 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:07:01.587 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:07:01.587 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:07:01.587 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:07:01.587 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:07:01.588 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to INITIAL 19:07:01.588 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to INTERMEDIATE 19:07:01.589 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:07:01.589 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:07:01.590 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:07:01.590 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to COMPLETE 19:07:01.590 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Finished authentication with no session expiration and no session re-authentication 19:07:01.590 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:07:01.590 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Successfully authenticated with localhost/127.0.0.1 19:07:01.590 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating API versions fetch from node -1. 19:07:01.591 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=0) and timeout 30000 to node -1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:07:01.594 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Received API_VERSIONS response from node -1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=0): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), 
ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:07:01.594 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node -1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], 
DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:07:01.595 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":0,"clientId":"mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"ap
iKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:36412-4","totalTimeMs":2.415,"requestQueueTimeMs":0.393,"localTimeMs":1.53,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.163,"sendTimeMs":0.327,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:07:01.595 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: -1 rack: null) 19:07:01.595 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=1) and timeout 30000 to node -1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:07:01.595 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) to node localhost:38537 (id: -1 rack: null) with correlation ID 2 19:07:01.596 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending INIT_PRODUCER_ID request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=2) and timeout 30000 to node -1: InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:01.599 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Received METADATA response from node -1 for request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=1): 
MetadataResponseData(throttleTimeMs=0, brokers=[MetadataResponseBroker(nodeId=1, host='localhost', port=38537, rack=null)], clusterId='rNB1PmTvTz-7PD_3X0wh3g', controllerId=1, topics=[MetadataResponseTopic(errorCode=0, name='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, isInternal=false, partitions=[MetadataResponsePartition(errorCode=0, partitionIndex=0, leaderId=1, leaderEpoch=0, replicaNodes=[1], isrNodes=[1], offlineReplicas=[])], topicAuthorizedOperations=-2147483648)], clusterAuthorizedOperations=-2147483648) 19:07:01.599 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":3,"requestApiVersion":12,"correlationId":1,"clientId":"mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7","requestApiKeyName":"METADATA"},"request":{"topics":[{"topicId":"AAAAAAAAAAAAAAAAAAAAAA","name":"my-test-topic"}],"allowAutoTopicCreation":true,"includeTopicAuthorizedOperations":false},"response":{"throttleTimeMs":0,"brokers":[{"nodeId":1,"host":"localhost","port":38537,"rack":null}],"clusterId":"rNB1PmTvTz-7PD_3X0wh3g","controllerId":1,"topics":[{"errorCode":0,"name":"my-test-topic","topicId":"Y7FYMZASRPqaYrfAS485Ow","isInternal":false,"partitions":[{"errorCode":0,"partitionIndex":0,"leaderId":1,"leaderEpoch":0,"replicaNodes":[1],"isrNodes":[1],"offlineReplicas":[]}],"topicAuthorizedOperations":-2147483648}]},"connection":"127.0.0.1:38537-127.0.0.1:36412-4","totalTimeMs":3.023,"requestQueueTimeMs":0.246,"localTimeMs":2.526,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.09,"sendTimeMs":0.16,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.599 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Resetting the last seen epoch of partition my-test-topic-0 to 0 since the associated topicId changed from null to Y7FYMZASRPqaYrfAS485Ow 19:07:01.599 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Cluster ID: rNB1PmTvTz-7PD_3X0wh3g 19:07:01.599 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.Metadata - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Updated cluster metadata updateVersion 2 to MetadataCache{clusterId='rNB1PmTvTz-7PD_3X0wh3g', nodes={1=localhost:38537 (id: 1 rack: null)}, partitions=[PartitionMetadata(error=NONE, partition=my-test-topic-0, leader=Optional[1], leaderEpoch=Optional[0], replicas=1, isr=1, offlineReplicas=)], controller=localhost:38537 (id: 1 rack: null)} 19:07:01.604 [data-plane-kafka-request-handler-0] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Requesting next Producer ID block 19:07:01.609 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.609 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating 
connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:01.609 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.609 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.610 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:36414 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:07:01.610 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:36414 19:07:01.611 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 19:07:01.612 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:07:01.612 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Completed connection to node 1. Fetching API versions. 
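The handshake just completed for the broker's internal forwarding channel mirrors the one the test consumer performed earlier; that consumer is what drives the steady FETCH (epochs 25 through 31 of session 1631787598), HEARTBEAT and asynchronous OFFSET_COMMIT traffic in the first half of this excerpt. As a companion to the producer sketch above, here is a minimal consumer sketch under the same assumptions (illustrative client and group ids, hypothetical password, not the project's actual code):

// Minimal sketch (not the project's actual code): a KafkaConsumer whose poll/heartbeat/
// auto-commit cycle matches the FETCH, HEARTBEAT and OFFSET_COMMIT requests in this log.
// Broker address, ids and credentials are illustrative placeholders.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PairwiseConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "SASL_PLAINTEXT://localhost:38537");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.CLIENT_ID_CONFIG, "mso-123456-consumer");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true"); // async auto-commit logged at 19:07:00.994
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // assumed; the log fetches from offset 0
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Same SASL/PLAIN settings as the producer sketch above; the password is hypothetical.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            // Each poll drives the incremental FETCH requests (maxWaitMs=500) and keeps the
            // background heartbeat thread alive; offsets are committed automatically.
            for (int i = 0; i < 10; i++) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}

The empty FetchResponseData(responses=[]) entries with totalTimeMs of roughly 505 ms simply mean the broker parked each fetch for the full maxWaitMs=500 because no records beyond offset 0 had arrived yet.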
19:07:01.612 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:07:01.612 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:07:01.613 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:07:01.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:07:01.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:07:01.613 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:07:01.613 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:07:01.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INITIAL 19:07:01.613 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to INTERMEDIATE 19:07:01.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:07:01.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:07:01.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:07:01.614 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:07:01.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Set SASL client state to COMPLETE 19:07:01.614 
[TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [BrokerToControllerChannelManager broker=1 name=forwarding] Finished authentication with no session expiration and no session re-authentication 19:07:01.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Successfully authenticated with localhost/127.0.0.1 19:07:01.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Initiating API versions fetch from node 1. 19:07:01.614 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:07:01.617 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=1, correlationId=1): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, 
minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:07:01.617 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":1,"clientId":"1","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxV
ersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:36414-4","totalTimeMs":1.555,"requestQueueTimeMs":0.255,"localTimeMs":1.046,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.089,"sendTimeMs":0.162,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:07:01.617 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 
0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 19:07:01.618 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Sending ALLOCATE_PRODUCER_IDS request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0) and timeout 30000 to node 1: AllocateProducerIdsRequestData(brokerId=1, brokerEpoch=25) 19:07:01.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:getData cxid:0x102 zxid:0xfffffffffffffffe txntype:unknown reqpath:/latest_producer_id_block 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 1 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:07:01.626 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 19:07:01.626 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:ping cxid:0xfffffffffffffffe zxid:0xfffffffffffffffe txntype:unknown reqpath:n/a 19:07:01.627 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 258,4 replyHeader:: 258,139,0 request:: '/latest_producer_id_block,F response:: ,s{15,15,1755111998597,1755111998597,0,0,0,0,0,0,15} 19:07:01.627 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got ping response for session id: 0x1000002c50e0000 after 1ms. 
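The records above show the standard SASL_PLAINTEXT handshake for the PLAIN mechanism: API_VERSIONS during authentication, SASL_HANDSHAKE selecting 'PLAIN', then AUTHENTICATE and COMPLETE on both client and server side, followed by the controller allocating a producer-id block via ZooKeeper. A minimal sketch of the client properties that drive such a handshake follows; the broker address and credentials are illustrative placeholders, not values confirmed by this run.

import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslPlainClientConfig {
    // Illustrative only: properties a Kafka Java client would use to perform
    // the SASL_PLAINTEXT / PLAIN handshake seen in the log above.
    public static Properties baseProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537"); // placeholder broker address
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
            + "username=\"admin\" password=\"admin-secret\";"); // placeholder credentials
        return props;
    }
}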
19:07:01.628 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 19:07:01.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Checking session 0x1000002c50e0000 19:07:01.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Permission requested: 2 19:07:01.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ACLs for node: [31,s{'world,'anyone} ] 19:07:01.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.ZooKeeperServer - Client credentials: ['sasl,'zooclient , 'ip,'127.0.0.1 ] 19:07:01.630 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 303344084217 19:07:01.631 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:setData cxid:0x103 zxid:0x8c txntype:5 reqpath:n/a 19:07:01.632 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - latest_producer_id_block 19:07:01.632 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8c, Digest in log and actual tree: 302721938539 19:07:01.632 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:setData cxid:0x103 zxid:0x8c txntype:5 reqpath:n/a 19:07:01.633 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:/latest_producer_id_block serverPath:/latest_producer_id_block finished:false header:: 259,5 replyHeader:: 259,140,0 request:: '/latest_producer_id_block,#7b2276657273696f6e223a312c2262726f6b6572223a312c22626c6f636b5f7374617274223a2230222c22626c6f636b5f656e64223a22393939227d,0 response:: s{15,140,1755111998597,1755112021630,1,0,0,0,60,0,15} 19:07:01.634 [controller-event-thread] DEBUG kafka.zk.KafkaZkClient - Conditional update of path /latest_producer_id_block with value {"version":1,"broker":1,"block_start":"0","block_end":"999"} and expected version 0 succeeded, returning the new version: 1 19:07:01.635 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 19:07:01.637 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Received ALLOCATE_PRODUCER_IDS response from node 1 for request with header RequestHeader(apiKey=ALLOCATE_PRODUCER_IDS, apiVersion=0, clientId=1, correlationId=0): AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 19:07:01.638 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":67,"requestApiVersion":0,"correlationId":0,"clientId":"1","requestApiKeyName":"ALLOCATE_PRODUCER_IDS"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"throttleTimeMs":0,"errorCode":0,"producerIdStart":0,"producerIdLen":1000},"connection":"127.0.0.1:38537-127.0.0.1:36414-4","totalTimeMs":18.631,"requestQueueTimeMs":1.23,"localTimeMs":2.071,"remoteTimeMs":14.919,"throttleTimeMs":0,"responseQueueTimeMs":0.105,"sendTimeMs":0.304,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.638 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG kafka.coordinator.transaction.RPCProducerIdManager - [RPC ProducerId Manager 1]: Got next producer ID block from controller AllocateProducerIdsResponseData(throttleTimeMs=0, errorCode=0, producerIdStart=0, producerIdLen=1000) 19:07:01.643 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Received INIT_PRODUCER_ID response from node -1 for request with header RequestHeader(apiKey=INIT_PRODUCER_ID, apiVersion=4, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=2): InitProducerIdResponseData(throttleTimeMs=0, errorCode=0, producerId=0, producerEpoch=0) 19:07:01.644 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] ProducerId set to 0 with epoch 0 19:07:01.644 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Transition from state INITIALIZING to READY 19:07:01.645 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":22,"requestApiVersion":4,"correlationId":2,"clientId":"mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7","requestApiKeyName":"INIT_PRODUCER_ID"},"request":{"transactionalId":null,"transactionTimeoutMs":2147483647,"producerId":-1,"producerEpoch":-1},"response":{"throttleTimeMs":0,"errorCode":0,"producerId":0,"producerEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:36412-4","totalTimeMs":44.029,"requestQueueTimeMs":1.803,"localTimeMs":41.558,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.211,"sendTimeMs":0.456,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.646 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.646 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:01.646 
[kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.646 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.648 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Created socket with SO_RCVBUF = 32768, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1 19:07:01.648 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:07:01.648 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Completed connection to node 1. Fetching API versions. 19:07:01.649 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:36416 on /127.0.0.1:38537 and assigned it to processor 0, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:07:01.649 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Processor 0 listening to new connection from /127.0.0.1:36416 19:07:01.649 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:07:01.649 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:07:01.650 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:07:01.650 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:07:01.650 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:07:01.651 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:07:01.651 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:07:01.651 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to INITIAL 19:07:01.651 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to INTERMEDIATE 19:07:01.651 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to AUTHENTICATE during authentication 19:07:01.652 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:07:01.652 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:07:01.652 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:07:01.652 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to COMPLETE 19:07:01.652 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Finished authentication with no session expiration and no session re-authentication 19:07:01.652 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Successfully authenticated with localhost/127.0.0.1 19:07:01.652 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating API versions fetch from node 1. 
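The InitProducerId exchange and the TransactionManager transition from INITIALIZING to READY logged above are what a non-transactional producer performs on startup: the request carries transactionalId=null and the broker hands back producerId=0 with epoch 0. A hedged sketch of how such a producer could be constructed is below; the client id is a placeholder, the idempotence setting is an assumption (it may simply be the client default), and the SASL properties are taken from the earlier sketch.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerSketch {
    // Illustrative only: a producer whose startup would issue InitProducerId
    // and move its TransactionManager from INITIALIZING to READY as logged above.
    public static KafkaProducer<String, String> create(Properties saslProps) {
        Properties props = new Properties();
        props.putAll(saslProps); // SASL_PLAINTEXT / PLAIN settings from the earlier sketch
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer"); // placeholder client id
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);      // assumption: idempotence enabled, which triggers InitProducerId
        props.put(ProducerConfig.ACKS_CONFIG, "all");                   // acks=-1, matching the PRODUCE request later in the log
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }
}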
19:07:01.652 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending API_VERSIONS request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=3) and timeout 30000 to node 1: ApiVersionsRequestData(clientSoftwareName='apache-kafka-java', clientSoftwareVersion='3.3.1') 19:07:01.654 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Received API_VERSIONS response from node 1 for request with header RequestHeader(apiKey=API_VERSIONS, apiVersion=3, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=3): ApiVersionsResponseData(errorCode=0, apiKeys=[ApiVersion(apiKey=0, minVersion=0, maxVersion=9), ApiVersion(apiKey=1, minVersion=0, maxVersion=13), ApiVersion(apiKey=2, minVersion=0, maxVersion=7), ApiVersion(apiKey=3, minVersion=0, maxVersion=12), ApiVersion(apiKey=4, minVersion=0, maxVersion=6), ApiVersion(apiKey=5, minVersion=0, maxVersion=3), ApiVersion(apiKey=6, minVersion=0, maxVersion=7), ApiVersion(apiKey=7, minVersion=0, maxVersion=3), ApiVersion(apiKey=8, minVersion=0, maxVersion=8), ApiVersion(apiKey=9, minVersion=0, maxVersion=8), ApiVersion(apiKey=10, minVersion=0, maxVersion=4), ApiVersion(apiKey=11, minVersion=0, maxVersion=9), ApiVersion(apiKey=12, minVersion=0, maxVersion=4), ApiVersion(apiKey=13, minVersion=0, maxVersion=5), ApiVersion(apiKey=14, minVersion=0, maxVersion=5), ApiVersion(apiKey=15, minVersion=0, maxVersion=5), ApiVersion(apiKey=16, minVersion=0, maxVersion=4), ApiVersion(apiKey=17, minVersion=0, maxVersion=1), ApiVersion(apiKey=18, minVersion=0, maxVersion=3), ApiVersion(apiKey=19, minVersion=0, maxVersion=7), ApiVersion(apiKey=20, minVersion=0, maxVersion=6), ApiVersion(apiKey=21, minVersion=0, maxVersion=2), ApiVersion(apiKey=22, minVersion=0, maxVersion=4), ApiVersion(apiKey=23, minVersion=0, maxVersion=4), ApiVersion(apiKey=24, minVersion=0, maxVersion=3), ApiVersion(apiKey=25, minVersion=0, maxVersion=3), ApiVersion(apiKey=26, minVersion=0, maxVersion=3), ApiVersion(apiKey=27, minVersion=0, maxVersion=1), ApiVersion(apiKey=28, minVersion=0, maxVersion=3), ApiVersion(apiKey=29, minVersion=0, maxVersion=3), ApiVersion(apiKey=30, minVersion=0, maxVersion=3), ApiVersion(apiKey=31, minVersion=0, maxVersion=3), ApiVersion(apiKey=32, minVersion=0, maxVersion=4), ApiVersion(apiKey=33, minVersion=0, maxVersion=2), ApiVersion(apiKey=34, minVersion=0, maxVersion=2), ApiVersion(apiKey=35, minVersion=0, maxVersion=4), ApiVersion(apiKey=36, minVersion=0, maxVersion=2), ApiVersion(apiKey=37, minVersion=0, maxVersion=3), ApiVersion(apiKey=38, minVersion=0, maxVersion=3), ApiVersion(apiKey=39, minVersion=0, maxVersion=2), ApiVersion(apiKey=40, minVersion=0, maxVersion=2), ApiVersion(apiKey=41, minVersion=0, maxVersion=3), ApiVersion(apiKey=42, minVersion=0, maxVersion=2), ApiVersion(apiKey=43, minVersion=0, maxVersion=2), ApiVersion(apiKey=44, minVersion=0, maxVersion=1), ApiVersion(apiKey=45, minVersion=0, maxVersion=0), ApiVersion(apiKey=46, minVersion=0, maxVersion=0), ApiVersion(apiKey=47, minVersion=0, maxVersion=0), ApiVersion(apiKey=48, minVersion=0, maxVersion=1), ApiVersion(apiKey=49, minVersion=0, maxVersion=1), 
ApiVersion(apiKey=50, minVersion=0, maxVersion=0), ApiVersion(apiKey=51, minVersion=0, maxVersion=0), ApiVersion(apiKey=56, minVersion=0, maxVersion=2), ApiVersion(apiKey=57, minVersion=0, maxVersion=1), ApiVersion(apiKey=60, minVersion=0, maxVersion=0), ApiVersion(apiKey=61, minVersion=0, maxVersion=0), ApiVersion(apiKey=65, minVersion=0, maxVersion=0), ApiVersion(apiKey=66, minVersion=0, maxVersion=0), ApiVersion(apiKey=67, minVersion=0, maxVersion=0)], throttleTimeMs=0, supportedFeatures=[], finalizedFeaturesEpoch=0, finalizedFeatures=[]) 19:07:01.655 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 has finalized features epoch: 0, finalized features: [], supported features: [], API versions: (Produce(0): 0 to 9 [usable: 9], Fetch(1): 0 to 13 [usable: 13], ListOffsets(2): 0 to 7 [usable: 7], Metadata(3): 0 to 12 [usable: 12], LeaderAndIsr(4): 0 to 6 [usable: 6], StopReplica(5): 0 to 3 [usable: 3], UpdateMetadata(6): 0 to 7 [usable: 7], ControlledShutdown(7): 0 to 3 [usable: 3], OffsetCommit(8): 0 to 8 [usable: 8], OffsetFetch(9): 0 to 8 [usable: 8], FindCoordinator(10): 0 to 4 [usable: 4], JoinGroup(11): 0 to 9 [usable: 9], Heartbeat(12): 0 to 4 [usable: 4], LeaveGroup(13): 0 to 5 [usable: 5], SyncGroup(14): 0 to 5 [usable: 5], DescribeGroups(15): 0 to 5 [usable: 5], ListGroups(16): 0 to 4 [usable: 4], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 3], CreateTopics(19): 0 to 7 [usable: 7], DeleteTopics(20): 0 to 6 [usable: 6], DeleteRecords(21): 0 to 2 [usable: 2], InitProducerId(22): 0 to 4 [usable: 4], OffsetForLeaderEpoch(23): 0 to 4 [usable: 4], AddPartitionsToTxn(24): 0 to 3 [usable: 3], AddOffsetsToTxn(25): 0 to 3 [usable: 3], EndTxn(26): 0 to 3 [usable: 3], WriteTxnMarkers(27): 0 to 1 [usable: 1], TxnOffsetCommit(28): 0 to 3 [usable: 3], DescribeAcls(29): 0 to 3 [usable: 3], CreateAcls(30): 0 to 3 [usable: 3], DeleteAcls(31): 0 to 3 [usable: 3], DescribeConfigs(32): 0 to 4 [usable: 4], AlterConfigs(33): 0 to 2 [usable: 2], AlterReplicaLogDirs(34): 0 to 2 [usable: 2], DescribeLogDirs(35): 0 to 4 [usable: 4], SaslAuthenticate(36): 0 to 2 [usable: 2], CreatePartitions(37): 0 to 3 [usable: 3], CreateDelegationToken(38): 0 to 3 [usable: 3], RenewDelegationToken(39): 0 to 2 [usable: 2], ExpireDelegationToken(40): 0 to 2 [usable: 2], DescribeDelegationToken(41): 0 to 3 [usable: 3], DeleteGroups(42): 0 to 2 [usable: 2], ElectLeaders(43): 0 to 2 [usable: 2], IncrementalAlterConfigs(44): 0 to 1 [usable: 1], AlterPartitionReassignments(45): 0 [usable: 0], ListPartitionReassignments(46): 0 [usable: 0], OffsetDelete(47): 0 [usable: 0], DescribeClientQuotas(48): 0 to 1 [usable: 1], AlterClientQuotas(49): 0 to 1 [usable: 1], DescribeUserScramCredentials(50): 0 [usable: 0], AlterUserScramCredentials(51): 0 [usable: 0], AlterPartition(56): 0 to 2 [usable: 2], UpdateFeatures(57): 0 to 1 [usable: 1], DescribeCluster(60): 0 [usable: 0], DescribeProducers(61): 0 [usable: 0], DescribeTransactions(65): 0 [usable: 0], ListTransactions(66): 0 [usable: 0], AllocateProducerIds(67): 0 [usable: 0]). 
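The records that follow show the producer assigning producerId 0 / epoch 0 to a batch for my-test-topic-0 and sending a PRODUCE request with acks=-1, which the broker appends and acknowledges at base offset 0. A minimal send sketch that would generate that kind of request is below; the payload is a placeholder.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProduceSketch {
    // Illustrative only: send one record to the test topic and wait for the
    // broker acknowledgement, as reflected in the PRODUCE request/response below.
    public static void send(KafkaProducer<String, String> producer) {
        ProducerRecord<String, String> record =
            new ProducerRecord<>("my-test-topic", "example payload"); // payload is a placeholder
        producer.send(record, (RecordMetadata metadata, Exception e) -> {
            if (e != null) {
                e.printStackTrace();
            } else {
                System.out.printf("appended to %s-%d at offset %d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
            }
        });
        producer.flush(); // block until the broker has acknowledged the batch
    }
}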
19:07:01.655 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":18,"requestApiVersion":3,"correlationId":3,"clientId":"mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7","requestApiKeyName":"API_VERSIONS"},"request":{"clientSoftwareName":"apache-kafka-java","clientSoftwareVersion":"3.3.1"},"response":{"errorCode":0,"apiKeys":[{"apiKey":0,"minVersion":0,"maxVersion":9},{"apiKey":1,"minVersion":0,"maxVersion":13},{"apiKey":2,"minVersion":0,"maxVersion":7},{"apiKey":3,"minVersion":0,"maxVersion":12},{"apiKey":4,"minVersion":0,"maxVersion":6},{"apiKey":5,"minVersion":0,"maxVersion":3},{"apiKey":6,"minVersion":0,"maxVersion":7},{"apiKey":7,"minVersion":0,"maxVersion":3},{"apiKey":8,"minVersion":0,"maxVersion":8},{"apiKey":9,"minVersion":0,"maxVersion":8},{"apiKey":10,"minVersion":0,"maxVersion":4},{"apiKey":11,"minVersion":0,"maxVersion":9},{"apiKey":12,"minVersion":0,"maxVersion":4},{"apiKey":13,"minVersion":0,"maxVersion":5},{"apiKey":14,"minVersion":0,"maxVersion":5},{"apiKey":15,"minVersion":0,"maxVersion":5},{"apiKey":16,"minVersion":0,"maxVersion":4},{"apiKey":17,"minVersion":0,"maxVersion":1},{"apiKey":18,"minVersion":0,"maxVersion":3},{"apiKey":19,"minVersion":0,"maxVersion":7},{"apiKey":20,"minVersion":0,"maxVersion":6},{"apiKey":21,"minVersion":0,"maxVersion":2},{"apiKey":22,"minVersion":0,"maxVersion":4},{"apiKey":23,"minVersion":0,"maxVersion":4},{"apiKey":24,"minVersion":0,"maxVersion":3},{"apiKey":25,"minVersion":0,"maxVersion":3},{"apiKey":26,"minVersion":0,"maxVersion":3},{"apiKey":27,"minVersion":0,"maxVersion":1},{"apiKey":28,"minVersion":0,"maxVersion":3},{"apiKey":29,"minVersion":0,"maxVersion":3},{"apiKey":30,"minVersion":0,"maxVersion":3},{"apiKey":31,"minVersion":0,"maxVersion":3},{"apiKey":32,"minVersion":0,"maxVersion":4},{"apiKey":33,"minVersion":0,"maxVersion":2},{"apiKey":34,"minVersion":0,"maxVersion":2},{"apiKey":35,"minVersion":0,"maxVersion":4},{"apiKey":36,"minVersion":0,"maxVersion":2},{"apiKey":37,"minVersion":0,"maxVersion":3},{"apiKey":38,"minVersion":0,"maxVersion":3},{"apiKey":39,"minVersion":0,"maxVersion":2},{"apiKey":40,"minVersion":0,"maxVersion":2},{"apiKey":41,"minVersion":0,"maxVersion":3},{"apiKey":42,"minVersion":0,"maxVersion":2},{"apiKey":43,"minVersion":0,"maxVersion":2},{"apiKey":44,"minVersion":0,"maxVersion":1},{"apiKey":45,"minVersion":0,"maxVersion":0},{"apiKey":46,"minVersion":0,"maxVersion":0},{"apiKey":47,"minVersion":0,"maxVersion":0},{"apiKey":48,"minVersion":0,"maxVersion":1},{"apiKey":49,"minVersion":0,"maxVersion":1},{"apiKey":50,"minVersion":0,"maxVersion":0},{"apiKey":51,"minVersion":0,"maxVersion":0},{"apiKey":56,"minVersion":0,"maxVersion":2},{"apiKey":57,"minVersion":0,"maxVersion":1},{"apiKey":60,"minVersion":0,"maxVersion":0},{"apiKey":61,"minVersion":0,"maxVersion":0},{"apiKey":65,"minVersion":0,"maxVersion":0},{"apiKey":66,"minVersion":0,"maxVersion":0},{"apiKey":67,"minVersion":0,"maxVersion":0}],"throttleTimeMs":0,"finalizedFeaturesEpoch":0},"connection":"127.0.0.1:38537-127.0.0.1:36416-5","totalTimeMs":1.154,"requestQueueTimeMs":0.287,"localTimeMs":0.585,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.088,"sendTimeMs":0.192,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:07:01.659 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] ProducerId of partition my-test-topic-0 set to 0 with epoch 0. Reinitialize sequence at beginning. 19:07:01.659 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.RecordAccumulator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Assigned producerId 0 and producerEpoch 0 to batch with base sequence 0 being sent to partition my-test-topic-0 19:07:01.662 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending PRODUCE request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=4) and timeout 30000 to node 1: {acks=-1,timeout=30000,partitionSizes=[my-test-topic-0=106]} 19:07:01.707 [data-plane-kafka-request-handler-0] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 3 (exclusive)with recovery point 3, last flushed: 1755112001327, current time: 1755112021707,unflushed: 3 19:07:01.711 [data-plane-kafka-request-handler-0] DEBUG kafka.cluster.Partition - [Partition my-test-topic-0 broker=1] High watermark updated from (offset=0 segment=[0:0]) to (offset=3 segment=[0:106]) 19:07:01.711 [data-plane-kafka-request-handler-0] DEBUG kafka.server.ReplicaManager - [ReplicaManager broker=1] Produce to local log in 36 ms 19:07:01.721 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Received PRODUCE response from node 1 for request with header RequestHeader(apiKey=PRODUCE, apiVersion=9, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=4): ProduceResponseData(responses=[TopicProduceResponse(name='my-test-topic', partitionResponses=[PartitionProduceResponse(index=0, errorCode=0, baseOffset=0, logAppendTimeMs=-1, logStartOffset=0, recordErrors=[], errorMessage=null)])], throttleTimeMs=0) 19:07:01.722 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":0,"requestApiVersion":9,"correlationId":4,"clientId":"mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7","requestApiKeyName":"PRODUCE"},"request":{"transactionalId":null,"acks":-1,"timeoutMs":30000,"topicData":[{"name":"my-test-topic","partitionData":[{"index":0,"recordsSizeInBytes":106}]}]},"response":{"responses":[{"name":"my-test-topic","partitionResponses":[{"index":0,"errorCode":0,"baseOffset":0,"logAppendTimeMs":-1,"logStartOffset":0,"recordErrors":[],"errorMessage":null}]}],"throttleTimeMs":0},"connection":"127.0.0.1:38537-127.0.0.1:36416-5","totalTimeMs":57.792,"requestQueueTimeMs":6.863,"localTimeMs":49.709,"remoteTimeMs":0.0,"throttleTimeMs":0,"responseQueueTimeMs":0.184,"sendTimeMs":1.034,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.726 
[data-plane-kafka-request-handler-0] DEBUG kafka.server.IncrementalFetchContext - Incremental fetch context with session id 1631787598 returning 1 partition(s) 19:07:01.728 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] ProducerId: 0; Set last ack'd sequence number for topic-partition my-test-topic-0 to 2 19:07:01.728 [data-plane-kafka-request-handler-0] DEBUG kafka.server.DelayedOperationPurgatory - Request key TopicPartitionOperationKey(my-test-topic,0) unblocked 1 Fetch operations 19:07:01.732 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Received FETCH response from node 1 for request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=70): FetchResponseData(throttleTimeMs=0, errorCode=0, sessionId=1631787598, responses=[FetchableTopicResponse(topic='', topicId=Y7FYMZASRPqaYrfAS485Ow, partitions=[PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109]))])]) 19:07:01.732 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 sent an incremental fetch response with throttleTimeMs = 0 for session 1631787598 with 1 response partition(s) 19:07:01.732 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Fetch READ_UNCOMMITTED at offset 0 for partition my-test-topic-0 returned fetch data PartitionData(partitionIndex=0, errorCode=0, highWatermark=3, lastStableOffset=3, logStartOffset=0, divergingEpoch=EpochEndOffset(epoch=-1, endOffset=-1), currentLeader=LeaderIdAndEpoch(leaderId=-1, leaderEpoch=-1), snapshotId=SnapshotId(endOffset=-1, epoch=-1), abortedTransactions=null, preferredReadReplica=-1, records=MemoryRecords(size=106, buffer=java.nio.HeapByteBuffer[pos=0 lim=106 cap=109])) 19:07:01.732 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.request.logger - Completed 
request:{"isForwarded":false,"requestHeader":{"requestApiKey":1,"requestApiVersion":13,"correlationId":70,"clientId":"mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27","requestApiKeyName":"FETCH"},"request":{"replicaId":-1,"maxWaitMs":500,"minBytes":1,"maxBytes":52428800,"isolationLevel":0,"sessionId":1631787598,"sessionEpoch":30,"topics":[],"forgottenTopicsData":[],"rackId":""},"response":{"throttleTimeMs":0,"errorCode":0,"sessionId":1631787598,"responses":[{"topicId":"Y7FYMZASRPqaYrfAS485Ow","partitions":[{"partitionIndex":0,"errorCode":0,"highWatermark":3,"lastStableOffset":3,"logStartOffset":0,"abortedTransactions":null,"preferredReadReplica":-1,"recordsSizeInBytes":106}]}]},"connection":"127.0.0.1:38537-127.0.0.1:54970-3","totalTimeMs":317.926,"requestQueueTimeMs":0.227,"localTimeMs":1.902,"remoteTimeMs":312.558,"throttleTimeMs":0,"responseQueueTimeMs":0.136,"sendTimeMs":3.102,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"apache-kafka-java","softwareVersion":"3.3.1"}} 19:07:01.735 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Added READ_UNCOMMITTED fetch request for partition my-test-topic-0 at position FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=Optional[localhost:38537 (id: 1 rack: null)], epoch=0}} to node localhost:38537 (id: 1 rack: null) 19:07:01.735 [main] DEBUG org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Built incremental fetch (sessionId=1631787598, epoch=31) for node 1. Added 0 partition(s), altered 1 partition(s), removed 0 partition(s), replaced 0 partition(s) out of 1 partition(s) 19:07:01.735 [main] DEBUG org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending READ_UNCOMMITTED IncrementalFetchRequest(toSend=(my-test-topic-0), toForget=(), toReplace=(), implied=(), canUseTopicIds=True) to broker localhost:38537 (id: 1 rack: null) 19:07:01.735 [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FETCH request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=71) and timeout 30000 to node 1: FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 19:07:01.737 [data-plane-kafka-request-handler-1] DEBUG kafka.server.FetchManager - Created a new incremental FetchContext for session id 1631787598, epoch 32: added 0 partition(s), updated 1 partition(s), removed 0 partition(s) 19:07:01.763 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 19:07:01.764 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Starting controlled shutdown 19:07:01.766 [main] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.766 [main] DEBUG 
org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:01.767 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.767 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Accepted connection from /127.0.0.1:36418 on /127.0.0.1:38537 and assigned it to processor 1, sendBufferSize [actual|requested]: [102400|102400] recvBufferSize [actual|requested]: [102400|102400] 19:07:01.767 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Processor 1 listening to new connection from /127.0.0.1:36418 19:07:01.768 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.769 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 1313280, SO_TIMEOUT = 0 to node 1 19:07:01.769 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_APIVERSIONS_RESPONSE 19:07:01.770 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Completed connection to node 1. Ready. 19:07:01.770 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_OR_VERSIONS_REQUEST during authentication 19:07:01.770 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request API_VERSIONS during authentication 19:07:01.771 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to HANDSHAKE_REQUEST during authentication 19:07:01.771 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to SEND_HANDSHAKE_REQUEST 19:07:01.771 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE 19:07:01.771 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Handling Kafka request SASL_HANDSHAKE during authentication 19:07:01.771 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Using SASL mechanism 'PLAIN' provided by client 19:07:01.772 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INITIAL 19:07:01.772 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to INTERMEDIATE 19:07:01.772 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to 
AUTHENTICATE during authentication 19:07:01.773 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Authentication complete; session max lifetime from broker config=0 ms, no credential expiration; no session expiration, sending 0 ms to client 19:07:01.773 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.security.authenticator.SaslServerAuthenticator - Set SASL server state to COMPLETE during authentication 19:07:01.773 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Successfully authenticated with /127.0.0.1 19:07:01.773 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Set SASL client state to COMPLETE 19:07:01.773 [main] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [KafkaServer id=1] Finished authentication with no session expiration and no session re-authentication 19:07:01.773 [main] DEBUG org.apache.kafka.common.network.Selector - [KafkaServer id=1] Successfully authenticated with localhost/127.0.0.1 19:07:01.774 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Sending CONTROLLED_SHUTDOWN request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0) and timeout 30000 to node 1: ControlledShutdownRequestData(brokerId=1, brokerEpoch=25) 19:07:01.779 [controller-event-thread] INFO kafka.controller.KafkaController - [Controller id=1] Shutting down broker 1 19:07:01.780 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] All shutting down brokers: 1 19:07:01.780 [controller-event-thread] DEBUG kafka.controller.KafkaController - [Controller id=1] Live brokers: 19:07:01.784 [controller-event-thread] INFO state.change.logger - [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions 19:07:01.790 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.request.logger - Completed request:{"isForwarded":false,"requestHeader":{"requestApiKey":7,"requestApiVersion":3,"correlationId":0,"clientId":"1","requestApiKeyName":"CONTROLLED_SHUTDOWN"},"request":{"brokerId":1,"brokerEpoch":25},"response":{"errorCode":0,"remainingPartitions":[]},"connection":"127.0.0.1:38537-127.0.0.1:36418-5","totalTimeMs":15.101,"requestQueueTimeMs":1.433,"localTimeMs":2.543,"remoteTimeMs":10.678,"throttleTimeMs":0,"responseQueueTimeMs":0.141,"sendTimeMs":0.304,"securityProtocol":"SASL_PLAINTEXT","principal":"User:admin","listener":"SASL_PLAINTEXT","clientInformation":{"softwareName":"unknown","softwareVersion":"unknown"}} 19:07:01.791 [main] DEBUG org.apache.kafka.clients.NetworkClient - [KafkaServer id=1] Received CONTROLLED_SHUTDOWN response from node 1 for request with header RequestHeader(apiKey=CONTROLLED_SHUTDOWN, apiVersion=3, clientId=1, correlationId=0): ControlledShutdownResponseData(errorCode=0, remainingPartitions=[]) 19:07:01.791 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] Controlled shutdown request returned successfully after 17ms 19:07:01.792 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG org.apache.kafka.common.network.Selector - [SocketServer listenerType=ZK_BROKER, nodeId=1] Connection with 
/127.0.0.1 (channelId=127.0.0.1:38537-127.0.0.1:36418-5) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at kafka.network.Processor.poll(SocketServer.scala:1055) at kafka.network.Processor.run(SocketServer.scala:959) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:01.795 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutting down 19:07:01.796 [main] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Shutdown completed 19:07:01.796 [/config/changes-event-process-thread] INFO kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread - [/config/changes-event-process-thread]: Stopped 19:07:01.797 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopping socket server request processors 19:07:01.798 [data-plane-kafka-socket-acceptor-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-38537] DEBUG kafka.network.DataPlaneAcceptor - Closing server socket, selector, and any throttled sockets. 19:07:01.799 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector - processor 1 19:07:01.799 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector - processor 0 19:07:01.800 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:54968-2 19:07:01.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:54958-0 19:07:01.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:54972-3 19:07:01.801 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-1] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:36414-4 19:07:01.802 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:36412-4 19:07:01.802 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:36416-5 19:07:01.802 [data-plane-kafka-network-thread-1-ListenerName(SASL_PLAINTEXT)-SASL_PLAINTEXT-0] DEBUG kafka.network.Processor - Closing selector connection 127.0.0.1:38537-127.0.0.1:54970-3 19:07:01.802 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] DEBUG org.apache.kafka.common.network.Selector - [BrokerToControllerChannelManager broker=1 name=forwarding] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at 
org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at kafka.common.InterBrokerSendThread.pollOnce(InterBrokerSendThread.scala:74) at kafka.server.BrokerToControllerRequestThread.doWork(BrokerToControllerChannelManager.scala:368) at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96) 19:07:01.802 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:01.803 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node -1 disconnected. 19:07:01.803 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO org.apache.kafka.clients.NetworkClient - [BrokerToControllerChannelManager broker=1 name=forwarding] Node 1 disconnected. 
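A few records earlier, the consumer (clientId mso-123456-consumer-..., groupId=mso-group) fetched the produced records from my-test-topic-0 through an incremental FETCH session at READ_UNCOMMITTED isolation. A hedged sketch of a consumer set up along those lines follows; the offset-reset policy is an assumption, and the SASL properties come from the earlier sketch.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumeSketch {
    // Illustrative only: subscribe to the test topic in the mso-group consumer
    // group and poll once, which drives FETCH requests like those logged above.
    public static void pollOnce(Properties saslProps) {
        Properties props = new Properties();
        props.putAll(saslProps); // SASL_PLAINTEXT / PLAIN settings from the earlier sketch
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "mso-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // assumption, not visible in the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-test-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset %d: %s%n", r.offset(), r.value());
            }
        }
    }
}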
19:07:01.806 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending metadata request MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) to node localhost:38537 (id: 1 rack: null) 19:07:01.806 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Sending METADATA request with header RequestHeader(apiKey=METADATA, apiVersion=12, clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7, correlationId=5) and timeout 30000 to node 1: MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:07:01.806 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Stopped socket server request processors 19:07:01.807 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:01.807 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shutting down 19:07:01.807 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 
19:07:01.807 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Cancelled in-flight METADATA request with correlation id 5 due to node 1 being disconnected (elapsed time since creation: 2ms, elapsed time since send: 2ms, request timeout: 30000ms): MetadataRequestData(topics=[MetadataRequestTopic(topicId=AAAAAAAAAAAAAAAAAAAAAA, name='my-test-topic')], allowAutoTopicCreation=true, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) 19:07:01.808 [data-plane-kafka-request-handler-1] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 1 on Broker 1], Kafka request handler 1 on broker 1 received shut down command 19:07:01.809 [data-plane-kafka-request-handler-0] DEBUG kafka.server.KafkaRequestHandler - [Kafka Request Handler 0 on Broker 1], Kafka request handler 0 on broker 1 received shut down command 19:07:01.810 [main] INFO kafka.server.KafkaRequestHandlerPool - [data-plane Kafka Request Handler on Broker 1], shut down completely 19:07:01.810 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 19:07:01.815 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=2147483646) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.io.EOFException: null at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:97) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:452) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:402) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:674) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:576) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Cancelled in-flight FETCH request with correlation id 71 due to node 1 being disconnected (elapsed time since creation: 81ms, elapsed time since send: 81ms, request timeout: 30000ms): FetchRequestData(clusterId=null, replicaId=-1, maxWaitMs=500, minBytes=1, maxBytes=52428800, isolationLevel=0, sessionId=1631787598, sessionEpoch=31, topics=[FetchTopic(topic='my-test-topic', topicId=Y7FYMZASRPqaYrfAS485Ow, partitions=[FetchPartition(partition=0, currentLeaderEpoch=0, fetchOffset=3, lastFetchedEpoch=-1, logStartOffset=-1, partitionMaxBytes=1048576)])], forgottenTopicsData=[], rackId='') 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node -1 disconnected. 19:07:01.816 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 2147483646 disconnected. 
19:07:01.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FETCH, apiVersion=13, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=71) due to node 1 being disconnected 19:07:01.817 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.FetchSessionHandler - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Error sending fetch request (sessionId=1631787598, epoch=31) to node 1: org.apache.kafka.common.errors.DisconnectException: null 19:07:01.817 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Group coordinator localhost:38537 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 19:07:01.817 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:01.817 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutting down 19:07:01.819 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Shutdown completed 19:07:01.819 [ExpirationReaper-1-AlterAcls] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-AlterAcls]: Stopped 19:07:01.820 [main] INFO kafka.server.KafkaApis - [KafkaApi-1] Shutdown complete. 19:07:01.821 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutting down 19:07:01.822 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Shutdown completed 19:07:01.822 [ExpirationReaper-1-topic] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-topic]: Stopped 19:07:01.824 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutting down. 19:07:01.824 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 19:07:01.826 [main] INFO kafka.coordinator.transaction.TransactionStateManager - [Transaction State Manager 1]: Shutdown complete 19:07:01.826 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutting down 19:07:01.826 [main] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Shutdown completed 19:07:01.826 [TxnMarkerSenderThread-1] INFO kafka.coordinator.transaction.TransactionMarkerChannelManager - [Transaction Marker Channel Manager 1]: Stopped 19:07:01.827 [main] INFO kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=1] Shutdown complete. 19:07:01.828 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutting down. 19:07:01.828 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
19:07:01.828 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutting down 19:07:01.829 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Shutdown completed 19:07:01.829 [ExpirationReaper-1-Heartbeat] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Heartbeat]: Stopped 19:07:01.830 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutting down 19:07:01.831 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Shutdown completed 19:07:01.831 [ExpirationReaper-1-Rebalance] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Rebalance]: Stopped 19:07:01.831 [main] INFO kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 1]: Shutdown complete. 19:07:01.832 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shutting down 19:07:01.833 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutting down 19:07:01.833 [main] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Shutdown completed 19:07:01.833 [LogDirFailureHandler] INFO kafka.server.ReplicaManager$LogDirFailureHandler - [LogDirFailureHandler]: Stopped 19:07:01.833 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutting down 19:07:01.835 [main] INFO kafka.server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 1] shutdown completed 19:07:01.835 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutting down 19:07:01.835 [main] INFO kafka.server.ReplicaAlterLogDirsManager - [ReplicaAlterLogDirsManager on broker 1] shutdown completed 19:07:01.835 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutting down 19:07:01.836 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Shutdown completed 19:07:01.836 [ExpirationReaper-1-Fetch] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Fetch]: Stopped 19:07:01.836 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutting down 19:07:01.837 [ExpirationReaper-1-Produce] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Stopped 19:07:01.837 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-Produce]: Shutdown completed 19:07:01.838 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutting down 19:07:01.838 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Shutdown completed 19:07:01.838 [ExpirationReaper-1-DeleteRecords] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-DeleteRecords]: Stopped 19:07:01.839 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Shutting down 19:07:01.840 [ExpirationReaper-1-ElectLeader] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - [ExpirationReaper-1-ElectLeader]: Stopped 19:07:01.840 [main] INFO kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper - 
[ExpirationReaper-1-ElectLeader]: Shutdown completed 19:07:01.908 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:01.908 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.908 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:01.909 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.909 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.911 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:01.911 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:01.911 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:01.917 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:01.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:01.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:01.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:01.918 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:01.919 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:01.919 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:01.919 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:01.919 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:01.941 [main] INFO kafka.server.ReplicaManager - [ReplicaManager broker=1] Shut down completely 19:07:01.942 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutting down 19:07:01.943 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Stopped 19:07:01.943 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=alterPartition]: Shutdown completed 19:07:01.946 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for alterPartition shutdown 19:07:01.946 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutting down 19:07:01.947 [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Stopped 19:07:01.947 [main] INFO kafka.server.BrokerToControllerRequestThread - [TestBroker:1:BrokerToControllerChannelManager broker=1 name=forwarding]: Shutdown completed 19:07:01.948 [main] INFO kafka.server.BrokerToControllerChannelManagerImpl - Broker to controller channel manager for forwarding shutdown 19:07:01.948 [main] INFO kafka.log.LogManager - Shutting down. 19:07:01.949 [main] INFO kafka.log.LogCleaner - Shutting down the log cleaner. 
19:07:01.950 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutting down 19:07:01.951 [kafka-log-cleaner-thread-0] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Stopped 19:07:01.951 [main] INFO kafka.log.LogCleaner - [kafka-log-cleaner-thread-0]: Shutdown completed 19:07:01.952 [main] DEBUG kafka.log.LogManager - Flushing and closing logs at /tmp/kafka-unit12180474530667575823 19:07:01.955 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002191, current time: 1755112021955,unflushed: 0 19:07:01.958 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-29, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.960 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.964 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-29/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.965 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002554, current time: 1755112021965,unflushed: 0 19:07:01.968 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-43, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.969 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.969 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-43/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.970 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002423, current time: 1755112021970,unflushed: 0 19:07:01.972 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-0, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.973 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.973 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-0/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.974 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002543, current time: 
1755112021974,unflushed: 0 19:07:01.976 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-6, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.976 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.976 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-6/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.977 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002440, current time: 1755112021977,unflushed: 0 19:07:01.978 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-35, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.978 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.978 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-35/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.979 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002413, current time: 1755112021979,unflushed: 0 19:07:01.982 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-30, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.982 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.982 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-30/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.982 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002604, current time: 1755112021982,unflushed: 0 19:07:01.984 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-13, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.984 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.984 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-13/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.985 
[log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001970, current time: 1755112021985,unflushed: 0 19:07:01.986 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-26, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.986 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.986 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-26/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.987 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002521, current time: 1755112021987,unflushed: 0 19:07:01.988 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-21, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.988 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.988 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-21/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.989 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001930, current time: 1755112021989,unflushed: 0 19:07:01.990 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-19, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.990 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.991 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-19/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.991 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002106, current time: 1755112021991,unflushed: 0 19:07:01.993 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-25, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.993 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.index to 0, position is 0 and limit 
is 0 19:07:01.993 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-25/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.993 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001910, current time: 1755112021993,unflushed: 0 19:07:01.995 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-33, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.995 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.995 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-33/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.995 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001889, current time: 1755112021995,unflushed: 0 19:07:01.997 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-41, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:01.997 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:01.997 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-41/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:01.997 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 4 (inclusive)with recovery point 4, last flushed: 1755112021027, current time: 1755112021997,unflushed: 0 19:07:01.998 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-37, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.003 [log-closing-/tmp/kafka-unit12180474530667575823] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=__consumer_offsets-37] Wrote producer snapshot at offset 4 with 0 producer ids in 4 ms. 
19:07:02.004 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.004 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-37/00000000000000000000.timeindex to 12, position is 12 and limit is 12 19:07:02.005 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002350, current time: 1755112022005,unflushed: 0 19:07:02.007 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-8, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.007 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.007 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-8/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.007 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002037, current time: 1755112022007,unflushed: 0 19:07:02.009 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-24, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.009 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.009 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-24/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.009 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002006, current time: 1755112022009,unflushed: 0 19:07:02.011 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-49, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.011 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.011 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-49/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.012 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 3 (inclusive)with recovery point 3, last flushed: 
1755112021711, current time: 1755112022012,unflushed: 0 19:07:02.012 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=my-test-topic-0, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.012 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.013 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.013 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.013 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.014 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.015 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:02.015 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:02.015 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:02.015 [log-closing-/tmp/kafka-unit12180474530667575823] INFO kafka.log.ProducerStateManager - [ProducerStateManager partition=my-test-topic-0] Wrote producer snapshot at offset 3 with 1 producer ids in 3 ms. 19:07:02.015 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.016 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/my-test-topic-0/00000000000000000000.timeindex to 12, position is 12 and limit is 12 19:07:02.016 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001861, current time: 1755112022016,unflushed: 0 19:07:02.017 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-3, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.018 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.018 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-3/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.018 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002116, current time: 1755112022018,unflushed: 0 19:07:02.020 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.020 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.020 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-40, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.021 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.021 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-40/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.021 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002478, current time: 1755112022021,unflushed: 0 19:07:02.022 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-27, 
dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.023 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.023 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-27/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.023 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002133, current time: 1755112022023,unflushed: 0 19:07:02.024 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-17, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.025 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.025 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-17/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.025 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002144, current time: 1755112022025,unflushed: 0 19:07:02.026 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-32, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.026 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.027 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-32/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.027 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002019, current time: 1755112022027,unflushed: 0 19:07:02.028 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-39, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.028 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.028 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-39/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.029 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to 
offset 0 (inclusive)with recovery point 0, last flushed: 1755112002098, current time: 1755112022029,unflushed: 0 19:07:02.030 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-2, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.030 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.030 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-2/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.030 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002298, current time: 1755112022030,unflushed: 0 19:07:02.032 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-44, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.032 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.032 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-44/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.032 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002509, current time: 1755112022032,unflushed: 0 19:07:02.034 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-12, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.034 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.034 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-12/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.034 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002532, current time: 1755112022034,unflushed: 0 19:07:02.036 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-36, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.036 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.036 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized 
/tmp/kafka-unit12180474530667575823/__consumer_offsets-36/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.036 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002360, current time: 1755112022036,unflushed: 0 19:07:02.037 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-45, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.037 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.037 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-45/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.038 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002088, current time: 1755112022038,unflushed: 0 19:07:02.039 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-16, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.039 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.039 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-16/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.039 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001899, current time: 1755112022039,unflushed: 0 19:07:02.040 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-10, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.040 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.041 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-10/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.041 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001961, current time: 1755112022041,unflushed: 0 19:07:02.042 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-11, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.042 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG 
kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.042 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-11/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.042 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002466, current time: 1755112022042,unflushed: 0 19:07:02.043 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-20, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.043 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.043 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-20/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.043 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002124, current time: 1755112022043,unflushed: 0 19:07:02.045 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-47, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.045 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.045 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-47/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.045 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001873, current time: 1755112022045,unflushed: 0 19:07:02.046 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-18, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.046 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.046 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-18/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.046 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002169, current time: 1755112022046,unflushed: 0 19:07:02.048 
[log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-7, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.048 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.048 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-7/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.048 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001920, current time: 1755112022048,unflushed: 0 19:07:02.050 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-48, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.050 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.050 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-48/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.050 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002180, current time: 1755112022050,unflushed: 0 19:07:02.052 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-22, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.052 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.052 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-22/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.052 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002061, current time: 1755112022052,unflushed: 0 19:07:02.076 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-46, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.077 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.077 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-46/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.077 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG 
kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002328, current time: 1755112022077,unflushed: 0 19:07:02.083 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-23, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.084 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.084 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-23/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.084 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002496, current time: 1755112022084,unflushed: 0 19:07:02.086 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-42, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.087 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.087 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-42/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.087 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002618, current time: 1755112022087,unflushed: 0 19:07:02.089 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-28, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.089 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.089 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-28/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.089 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001951, current time: 1755112022089,unflushed: 0 19:07:02.091 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-4, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.091 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.091 
[log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-4/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.091 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002052, current time: 1755112022091,unflushed: 0 19:07:02.093 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-31, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.093 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.093 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-31/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.094 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002454, current time: 1755112022094,unflushed: 0 19:07:02.095 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-5, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.095 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.095 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-5/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.096 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002078, current time: 1755112022096,unflushed: 0 19:07:02.097 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-1, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.097 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.097 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-1/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.098 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002396, current time: 1755112022098,unflushed: 0 19:07:02.099 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-15, dir=/tmp/kafka-unit12180474530667575823] Closing log 
19:07:02.099 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.099 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-15/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.099 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002338, current time: 1755112022099,unflushed: 0 19:07:02.101 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-38, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.101 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.101 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-38/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.101 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112001942, current time: 1755112022101,unflushed: 0 19:07:02.102 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-34, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.102 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.102 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-34/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.102 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 1755112002029, current time: 1755112022102,unflushed: 0 19:07:02.104 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-9, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.104 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.104 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-9/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.104 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit12180474530667575823] Flushing log up to offset 0 (inclusive)with recovery point 0, last flushed: 
1755112002316, current time: 1755112022104,unflushed: 0 19:07:02.106 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.UnifiedLog - [UnifiedLog partition=__consumer_offsets-14, dir=/tmp/kafka-unit12180474530667575823] Closing log 19:07:02.106 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.index to 0, position is 0 and limit is 0 19:07:02.106 [log-closing-/tmp/kafka-unit12180474530667575823] DEBUG kafka.log.AbstractIndex - Resized /tmp/kafka-unit12180474530667575823/__consumer_offsets-14/00000000000000000000.timeindex to 0, position is 0 and limit is 0 19:07:02.107 [main] DEBUG kafka.log.LogManager - Updating recovery points at /tmp/kafka-unit12180474530667575823 19:07:02.113 [main] DEBUG kafka.log.LogManager - Updating log start offsets at /tmp/kafka-unit12180474530667575823 19:07:02.116 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.120 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.121 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.122 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:02.123 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:02.123 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:02.123 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.161 [main] DEBUG kafka.log.LogManager - Writing clean shutdown marker at /tmp/kafka-unit12180474530667575823 19:07:02.163 [main] INFO kafka.log.LogManager - Shutdown complete. 19:07:02.164 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutting down 19:07:02.164 [controller-event-thread] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Stopped 19:07:02.164 [main] INFO kafka.controller.ControllerEventManager$ControllerEventThread - [ControllerEventThread controllerId=1] Shutdown completed 19:07:02.165 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Resigning 19:07:02.165 [main] DEBUG kafka.controller.KafkaController - [Controller id=1] Unregister BrokerModifications handler for Set(1) 19:07:02.166 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.166 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 
19:07:02.167 [main] INFO kafka.controller.ZkPartitionStateMachine - [PartitionStateMachine controllerId=1] Stopped partition state machine 19:07:02.168 [main] INFO kafka.controller.ZkReplicaStateMachine - [ReplicaStateMachine controllerId=1] Stopped replica state machine 19:07:02.169 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutting down 19:07:02.169 [main] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Shutdown completed 19:07:02.169 [TestBroker:1:Controller-1-to-broker-1-send-thread] INFO kafka.controller.RequestSendThread - [RequestSendThread controllerId=1] Stopped 19:07:02.171 [main] INFO kafka.controller.KafkaController - [Controller id=1] Resigned 19:07:02.171 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutting down 19:07:02.171 [main] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Shutdown completed 19:07:02.171 [feature-zk-node-event-process-thread] INFO kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread - [feature-zk-node-event-process-thread]: Stopped 19:07:02.172 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closing. 19:07:02.172 [main] DEBUG kafka.utils.KafkaScheduler - Shutting down task scheduler. 19:07:02.172 [main] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x1000002c50e0000 19:07:02.172 [main] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x1000002c50e0000 19:07:02.173 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from data tree is: 302721938539 19:07:02.173 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 303859357107 19:07:02.173 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 303442022782 19:07:02.173 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 301644705446 19:07:02.175 [ProcessThread(sid:0 cport:42969):] DEBUG org.apache.zookeeper.server.PrepRequestProcessor - Digest got from outstandingChanges is: 297441401439 19:07:02.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - Processing request:: sessionid:0x1000002c50e0000 type:closeSession cxid:0x104 zxid:0x8d txntype:-11 reqpath:n/a 19:07:02.177 [SyncThread:0] DEBUG org.apache.zookeeper.server.SessionTrackerImpl - Removing session 0x1000002c50e0000 19:07:02.178 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - controller 19:07:02.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /controller for session 0x1000002c50e0000 19:07:02.178 [SyncThread:0] DEBUG org.apache.zookeeper.common.PathTrie - brokers 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:07:02.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.DataTree - Deleting ephemeral node /brokers/ids/1 for session 0x1000002c50e0000 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/controller for session id 0x1000002c50e0000 19:07:02.178 [SyncThread:0] DEBUG 
org.apache.zookeeper.server.DataTree - Digests are matching for Zxid: 8d, Digest in log and actual tree: 297441401439 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:07:02.178 [SyncThread:0] DEBUG org.apache.zookeeper.server.FinalRequestProcessor - sessionid:0x1000002c50e0000 type:closeSession cxid:0x104 zxid:0x8d txntype:-11 reqpath:n/a 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 for session id 0x1000002c50e0000 19:07:02.178 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/controller 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification session id: 0x1000002c50e0000 19:07:02.178 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids for session id 0x1000002c50e0000 19:07:02.179 [main-SendThread(127.0.0.1:42969)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply session id: 0x1000002c50e0000, packet:: clientPath:null serverPath:null finished:false header:: 260,-11 replyHeader:: 260,141,0 request:: null response:: null 19:07:02.179 [main] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x1000002c50e0000 19:07:02.179 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeDeleted path:/brokers/ids/1 19:07:02.179 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/brokers/ids 19:07:02.180 [NIOWorkerThread-11] DEBUG org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:46386 which had sessionid 0x1000002c50e0000 19:07:02.216 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.223 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.224 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG 
org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.268 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:02.268 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:02.269 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:02.279 [main-EventThread] DEBUG kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Received event: WatchedEvent state:Closed type:None path:null 19:07:02.281 [main-EventThread] INFO org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000002c50e0000 19:07:02.281 [main] INFO org.apache.zookeeper.ZooKeeper - Session: 0x1000002c50e0000 closed 19:07:02.284 [main] INFO kafka.zookeeper.ZooKeeperClient - [ZooKeeperClient Kafka server] Closed. 
19:07:02.285 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutting down 19:07:02.289 [TestBroker:1ThrottledChannelReaper-Fetch] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Stopped 19:07:02.289 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Fetch]: Shutdown completed 19:07:02.289 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutting down 19:07:02.289 [TestBroker:1ThrottledChannelReaper-Produce] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Stopped 19:07:02.290 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Produce]: Shutdown completed 19:07:02.290 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutting down 19:07:02.290 [TestBroker:1ThrottledChannelReaper-Request] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Stopped 19:07:02.290 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-Request]: Shutdown completed 19:07:02.290 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutting down 19:07:02.290 [TestBroker:1ThrottledChannelReaper-ControllerMutation] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Stopped 19:07:02.291 [main] INFO kafka.server.ClientQuotaManager$ThrottledChannelReaper - [TestBroker:1ThrottledChannelReaper-ControllerMutation]: Shutdown completed 19:07:02.292 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutting down socket server 19:07:02.321 [main] INFO kafka.network.SocketServer - [SocketServer listenerType=ZK_BROKER, nodeId=1] Shutdown completed 19:07:02.322 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:07:02.322 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:07:02.322 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:07:02.324 [main] INFO kafka.server.BrokerTopicStats - Broker and topic stats closed 19:07:02.324 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.324 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.324 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.server for 1 unregistered 19:07:02.324 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shut down completed 19:07:02.325 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server 19:07:02.325 [NIOServerCxnFactory.SelectorThread-1] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 19:07:02.326 
[ConnnectionExpirer] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread interrupted 19:07:02.326 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:42969] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run method 19:07:02.326 [NIOServerCxnFactory.SelectorThread-0] INFO org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run method 19:07:02.329 [main] INFO org.apache.zookeeper.server.ZooKeeperServer - shutting down 19:07:02.330 [main] INFO org.apache.zookeeper.server.RequestThrottler - Shutting down 19:07:02.330 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - Draining request throttler queue 19:07:02.330 [RequestThrottler] INFO org.apache.zookeeper.server.RequestThrottler - RequestThrottler shutdown. Dropped 0 requests 19:07:02.330 [main] INFO org.apache.zookeeper.server.SessionTrackerImpl - Shutting down 19:07:02.330 [main] INFO org.apache.zookeeper.server.PrepRequestProcessor - Shutting down 19:07:02.330 [main] INFO org.apache.zookeeper.server.SyncRequestProcessor - Shutting down 19:07:02.330 [SyncThread:0] INFO org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited! 19:07:02.330 [ProcessThread(sid:0 cport:42969):] INFO org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited loop! 19:07:02.331 [main] INFO org.apache.zookeeper.server.FinalRequestProcessor - shutdown of request processor complete 19:07:02.331 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input stream: /tmp/kafka-unit11219859268625780946/version-2/log.1 19:07:02.331 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - Created new input archive: /tmp/kafka-unit11219859268625780946/version-2/log.1 19:07:02.336 [main] DEBUG org.apache.zookeeper.server.persistence.FileTxnLog - EOF exception java.io.EOFException: Failed to read /tmp/kafka-unit11219859268625780946/version-2/log.1 at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:771) at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:650) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:462) at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:449) at org.apache.zookeeper.server.persistence.FileTxnSnapLog.fastForwardFromEdits(FileTxnSnapLog.java:321) at org.apache.zookeeper.server.ZKDatabase.fastForwardDataBase(ZKDatabase.java:300) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:848) at org.apache.zookeeper.server.ZooKeeperServer.shutdown(ZooKeeperServer.java:796) at org.apache.zookeeper.server.NIOServerCnxnFactory.shutdown(NIOServerCnxnFactory.java:922) at org.apache.zookeeper.server.ZooKeeperServerMain.shutdown(ZooKeeperServerMain.java:219) at org.apache.curator.test.TestingZooKeeperMain.close(TestingZooKeeperMain.java:144) at org.apache.curator.test.TestingZooKeeperServer.stop(TestingZooKeeperServer.java:110) at org.apache.curator.test.TestingServer.stop(TestingServer.java:161) at com.salesforce.kafka.test.ZookeeperTestServer.stop(ZookeeperTestServer.java:129) at com.salesforce.kafka.test.KafkaTestCluster.stop(KafkaTestCluster.java:303) at com.salesforce.kafka.test.KafkaTestCluster.close(KafkaTestCluster.java:312) at org.onap.sdc.utils.SdcKafkaTest.after(SdcKafkaTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptLifecycleMethod(TimeoutExtension.java:126) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptAfterAllMethod(TimeoutExtension.java:116) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$11(ClassBasedTestDescriptor.java:412) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeAfterAllMethods$12(ClassBasedTestDescriptor.java:410) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at java.base/java.util.Collections$UnmodifiableCollection.forEach(Collections.java:1085) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeAfterAllMethods(ClassBasedTestDescriptor.java:410) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:212) at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.after(ClassBasedTestDescriptor.java:78) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:149) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:07:02.337 [Thread-2] DEBUG org.apache.zookeeper.server.ZooKeeperServer - ZooKeeper server is not running, so not proceeding to shutdown! 
19:07:02.337 [main] INFO kafka.server.KafkaServer - [KafkaServer id=1] shutting down 19:07:02.338 [main] INFO com.salesforce.kafka.test.ZookeeperTestServer - Shutting down zookeeper test server [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.483 s - in org.onap.sdc.utils.SdcKafkaTest [INFO] Running org.onap.sdc.utils.NotificationSenderTest 19:07:02.464 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.465 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.466 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.466 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.467 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.467 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.470 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:02.471 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 
19:07:02.471 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:02.471 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.590 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.591 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.593 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.642 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.643 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:02.644 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 19:07:02.644 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 19:07:02.693 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.693 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.693 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.743 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.746 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:02.746 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:02.746 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:02.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:02.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:02.794 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:02.795 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:02.795 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:02.796 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:02.797 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:02.797 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:02.797 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.846 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.897 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.897 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.898 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:02.948 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.998 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:02.998 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:02.999 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.049 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.099 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.100 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.100 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.150 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.200 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.201 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.252 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.301 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.301 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.303 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.354 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.401 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.405 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.456 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.502 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.502 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:03.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:03.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:03.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:03.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:03.509 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:03.509 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer 
clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:03.509 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:03.602 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.603 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.610 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.656 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:03.657 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 19:07:03.657 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 19:07:03.661 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.703 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:03.703 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:03.703 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:03.704 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:03.704 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:03.705 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at 
java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:03.705 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:03.705 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:03.706 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.711 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.761 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.806 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.807 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.812 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.863 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.907 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:03.907 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:03.913 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:03.964 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.008 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.008 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.014 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.065 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.108 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.108 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.115 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.166 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.209 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.209 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.217 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.267 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.309 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.310 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.318 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.369 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.410 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.410 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.419 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.470 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.511 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.511 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.520 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:04.520 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:04.520 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:04.521 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:04.521 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:04.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:04.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:04.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:04.611 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:04.612 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:04.623 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.658 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendDownloadStatus. Failed to send messages and close publisher. org.apache.kafka.common.KafkaException: null 19:07:04.673 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.678 [main] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:04.679 [main] DEBUG org.onap.sdc.utils.NotificationSender - Publisher server list: null 19:07:04.679 [main] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: status to topic null 19:07:04.680 [main] ERROR org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus. Failed to send status org.apache.kafka.common.KafkaException: null at org.onap.sdc.utils.kafka.SdcKafkaProducer.send(SdcKafkaProducer.java:65) at org.onap.sdc.utils.NotificationSender.send(NotificationSender.java:47) at org.onap.sdc.utils.NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus(NotificationSenderTest.java:83) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.342 s - in org.onap.sdc.utils.NotificationSenderTest
[INFO] Running org.onap.sdc.utils.KafkaCommonConfigTest
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 s - in org.onap.sdc.utils.KafkaCommonConfigTest
[INFO] Running org.onap.sdc.utils.GeneralUtilsTest
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 s - in org.onap.sdc.utils.GeneralUtilsTest
[INFO] Running org.onap.sdc.impl.NotificationConsumerTest
19:07:04.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available
19:07:04.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request
19:07:04.823 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available
19:07:04.824 [SessionTracker] INFO org.apache.zookeeper.server.SessionTrackerImpl - SessionTrackerImpl exited loop!
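Note: the ERROR entries and the long JUnit stack trace above come from NotificationSenderTest.whenSendingThrowsIOExceptionShouldReturnGeneralErrorStatus (NotificationSenderTest.java:83), which deliberately drives the send path into failure, which is why the suite still reports 3 of 3 tests passing. The sketch below shows only the general shape of such a failure-path test; the StatusPublisher interface, Result enum and sendStatus helper are illustrative stand-ins, not the actual org.onap.sdc classes or signatures.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.doThrow;
    import static org.mockito.Mockito.mock;

    import org.apache.kafka.common.KafkaException;
    import org.junit.jupiter.api.Test;

    class FailurePathSketchTest {

        interface StatusPublisher {                  // stand-in for the producer wrapper
            void send(String topic, String message);
        }

        enum Result { SUCCESS, GENERAL_ERROR }       // stand-in for the client's result enum

        // Stand-in sender logic: report GENERAL_ERROR when publishing throws.
        static Result sendStatus(StatusPublisher publisher, String topic, String message) {
            try {
                publisher.send(topic, message);
                return Result.SUCCESS;
            } catch (RuntimeException e) {
                return Result.GENERAL_ERROR;
            }
        }

        @Test
        void whenSendingThrowsShouldReturnGeneralErrorStatus() {
            StatusPublisher publisher = mock(StatusPublisher.class);
            doThrow(new KafkaException()).when(publisher).send("status-topic", "status");
            assertEquals(Result.GENERAL_ERROR, sendStatus(publisher, "status-topic", "status"));
        }
    }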
19:07:04.826 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Sending FindCoordinator request to broker localhost:38537 (id: 1 rack: null) 19:07:04.826 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.936 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:04.937 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:04.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:04.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:04.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:04.941 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:04.942 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 
19:07:04.942 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:04.942 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Cancelled request with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=4, clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, correlationId=72) due to node 1 being disconnected 19:07:04.942 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] FindCoordinator request failed due to org.apache.kafka.common.errors.DisconnectException 19:07:04.988 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.201 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.201 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.203 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.218 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:05.218 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:05.224 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.253 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.303 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.304 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.305 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.322 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.354 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.404 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.406 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.407 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.423 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.455 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.506 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:05.506 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:05.506 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:05.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:05.507 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:05.508 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.509 
[kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.509 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:05.510 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:05.511 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
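Note: the repeated "Polling for messages from topic: null" INFO lines from NotificationConsumer reflect a polling loop that keeps cycling even though the broker is unreachable (the topic is null here because, as the DistributionClientImpl entries say, the client was not initialized). An illustrative version of that kind of loop, not the actual org.onap.sdc NotificationConsumer:

    import java.time.Duration;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;

    public class PollLoopSketch {
        public static void run(Consumer<String, String> consumer, String topic) {
            while (!Thread.currentThread().isInterrupted()) {
                System.out.println("Polling for messages from topic: " + topic);
                // With no reachable broker, poll() returns an empty batch each cycle
                // and the loop simply logs and tries again.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // When a record does arrive, it is parsed and relayed onward, as in
                    // the "received notification from broker" entry further down.
                    System.out.println("received notification from broker: " + record.value());
                }
            }
        }
    }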
19:07:05.522 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.610 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.610 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.610 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.623 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.662 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.711 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.712 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.712 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.722 [pool-8-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.763 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.813 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:05.813 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.813 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.823 [pool-8-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.864 
[kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.913 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:05.914 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:05.914 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:05.914 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:05.914 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:05.915 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:05.915 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:05.916 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 
19:07:05.916 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:05.916 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:05.923 [pool-8-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:05.965 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.016 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.016 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.017 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.023 [pool-8-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.066 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.117 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.117 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.122 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.167 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.218 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.218 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.223 [pool-8-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.227 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:06.227 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:06.230 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.269 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.319 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.319 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.329 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.370 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.419 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.420 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.420 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node 
is available 19:07:06.430 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.431 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:06.431 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "bugabuga" : "xyz", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactBuga" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "buga.bug", "artifactType" : "BUGA_BUGA", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 19:07:06.449 [pool-9-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 19:07:06.471 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.520 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:06.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:06.522 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:06.523 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:06.523 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:06.524 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:06.524 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:06.525 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
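In the chunk above, NotificationConsumer logs the raw payload it received from the broker and then the filtered form it forwards to the client: unknown fields such as "bugabuga" and "artifactBuga" and the BUGA_BUGA artifact are dropped, and relatedArtifactsInfo is filled in. The \u003d sequences in the checksums are Gson's default HTML-safe escaping of '='. A minimal sketch of reading the top-level fields of such a payload with Gson follows; the model classes and the abbreviated sample JSON are illustrative only, not the distribution client's real classes (field names mirror the JSON keys, including the original "resoucreType" spelling):

    // Minimal sketch, not the client's real model classes: map the notification
    // JSON logged above onto plain POJOs with Gson and list each artifact.
    import java.util.List;
    import com.google.gson.Gson;

    public class NotificationParseSketch {
        static class Artifact {
            String artifactName;
            String artifactType;
            String artifactURL;
            String artifactUUID;
            int artifactTimeout;
        }
        static class Resource {
            String resourceInstanceName;
            String resoucreType;   // spelled exactly as in the payload
            List<Artifact> artifacts;
        }
        static class Notification {
            String distributionID;
            String serviceName;
            String serviceVersion;
            List<Resource> resources;
        }

        public static void main(String[] args) {
            // Abbreviated stand-in for the payloads shown in the log.
            String json = "{\"distributionID\":\"bcc7a72e-90b1-4c5f-9a37-28dc3cd86416\","
                    + "\"serviceName\":\"Testnotificationser1\",\"serviceVersion\":\"1.0\","
                    + "\"resources\":[{\"resourceInstanceName\":\"testnotificationvf11\","
                    + "\"resoucreType\":\"VF\",\"artifacts\":[{\"artifactName\":\"heat.yaml\","
                    + "\"artifactType\":\"HEAT\",\"artifactTimeout\":60}]}]}";
            Notification n = new Gson().fromJson(json, Notification.class);
            for (Resource r : n.resources) {
                for (Artifact a : r.artifacts) {
                    System.out.printf("%s / %s: %s (timeout %d)%n",
                            n.serviceName, r.resourceInstanceName, a.artifactName, a.artifactTimeout);
                }
            }
        }
    }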
19:07:06.529 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.621 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.624 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.629 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.675 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.721 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.722 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.725 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.729 [pool-9-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.776 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.822 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.826 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.830 [pool-9-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.877 
[kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.923 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:06.923 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:06.928 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:06.929 [pool-9-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:06.978 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.023 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.024 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.029 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.030 [pool-9-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.080 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:07.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:07.124 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:07.125 [kafka-coordinator-heartbeat-thread | 
mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:07.125 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:07.126 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:07.126 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:07.126 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:07.126 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.129 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.131 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.181 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.226 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.227 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.229 [pool-9-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.235 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:07.236 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:07.232 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.238 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.287 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.327 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.327 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.337 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.338 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.388 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.428 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.428 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.438 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.438 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:07.438 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1", "relatedArtifacts" : [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1", "relatedArtifacts" : [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ] }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 19:07:07.439 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:07.439 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:07.439 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:07.440 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:07.440 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:07.441 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:07.441 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:07.441 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
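Both the producer and the consumer above end up at the same WARN: node 1 at localhost/127.0.0.1:38537 cannot be reached, so every metadata and FindCoordinator attempt is abandoned. When triaging this kind of output it helps to separate "nothing is listening on the port" from "SASL credentials are wrong": a connection refused, as here, fails before the SASL handshake begins. A rough probe with the Kafka AdminClient, using assumed settings that mirror the log, is sketched below:

    // Sketch of a one-off reachability probe against the bootstrap address seen in
    // the log. With nothing listening on the port, describeCluster() times out
    // instead of returning node metadata. All settings are assumptions mirroring
    // the log; credentials are placeholders.
    import java.util.Properties;
    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.config.SaslConfigs;

    public class BrokerProbeSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:38537");
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"placeholder\" password=\"placeholder\";");
            props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");

            try (AdminClient admin = AdminClient.create(props)) {
                // Fails with a timeout (wrapped in ExecutionException/TimeoutException)
                // when the broker cannot be reached, matching the warnings above.
                System.out.println(admin.describeCluster().nodes().get(5, TimeUnit.SECONDS));
            }
        }
    }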
19:07:07.446 [pool-10-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifacts": [ "0005bc4a-2c19-452e-be6d-d574a56be4d0", "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "sample-xml-alldata-1-1.xml", "artifactType": "YANG_XML", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum": "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription": "MyYang", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "0005bc4a-2c19-452e-be6d-d574a56be4d0", "relatedArtifacts": [ "ce65d31c-35c0-43a9-90c7-596fc51d0c86" ], "relatedArtifactsInfo": [ { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": 
"ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } ] }, { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 19:07:07.528 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.528 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.537 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.542 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.593 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.629 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.629 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.637 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.643 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.694 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata 
request since no node is available 19:07:07.729 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.729 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.737 [pool-10-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.795 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.830 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.830 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.838 [pool-10-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.845 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.896 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.930 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:07.931 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:07.937 [pool-10-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:07.947 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:07.997 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.031 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.031 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.037 [pool-10-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.047 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.097 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.132 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.132 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.137 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.148 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.198 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.232 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.232 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.237 [pool-10-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.247 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:08.247 [main] DEBUG 
org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:08.249 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.250 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.299 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.333 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:08.333 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:08.333 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:08.333 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:08.334 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:08.334 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:08.335 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 
disconnected. 19:07:08.335 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:08.335 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.349 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.350 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.400 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.435 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.436 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.449 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.450 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:08.450 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 19:07:08.451 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.456 [pool-11-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 19:07:08.501 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.536 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.536 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.549 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.552 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.602 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.637 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.649 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.653 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:08.653 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:08.653 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:08.654 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:08.654 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:08.654 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:08.655 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:08.655 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:08.737 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.738 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.748 [pool-11-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.755 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.806 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.838 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:08.849 [pool-11-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.857 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.907 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:08.938 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:08.939 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator 
request 19:07:08.949 [pool-11-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:08.958 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.008 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.039 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.039 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.049 [pool-11-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.059 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.110 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.140 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.140 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.149 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.160 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.228 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.240 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:09.240 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:09.240 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:09.241 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:09.241 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:09.241 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:09.242 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:09.242 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:09.242 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.249 [pool-11-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.253 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:09.253 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:09.256 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.280 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.331 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.342 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.342 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.355 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.381 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.432 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.443 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.443 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.456 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.456 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:09.456 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "5v1234d8-5b6d-42c4-7t54-47v95n58qb7", "serviceName" : "srv1", "serviceVersion": "2.0", "serviceUUID" : 
"4e0697d8-5b6d-42c4-8c74-46c33d46624c", "serviceArtifacts":[ { "artifactName" : "ddd.yml", "artifactType" : "DG_XML", "artifactTimeout" : "65", "artifactDescription" : "description", "artifactURL" : "/sdc/v1/catalog/services/srv1/2.0/resources/ddd/3.0/artifacts/ddd.xml" , "resourceUUID" : "4e5874d8-5b6d-42c4-8c74-46c33d90drw" , "checksum" : "15e389rnrp58hsw==" } ]} 19:07:09.460 [pool-12-thread-2] ERROR org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null 19:07:09.461 [pool-12-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - Error exception occurred when fetching with Kafka Consumer:null java.lang.NullPointerException: null at org.onap.sdc.impl.NotificationCallbackBuilder.buildResourceInstancesLogic(NotificationCallbackBuilder.java:62) at org.onap.sdc.impl.NotificationCallbackBuilder.buildCallbackNotificationLogic(NotificationCallbackBuilder.java:48) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:57) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:09.482 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.532 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.543 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.543 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.555 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.583 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.633 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.644 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up 
sending metadata request since no node is available 19:07:09.644 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.656 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.683 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.734 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.744 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.744 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.755 [pool-12-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.784 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.835 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:09.835 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:09.835 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:09.836 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:09.836 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:09.837 [kafka-producer-network-thread | 
mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:09.837 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:09.837 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:09.844 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.844 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.856 [pool-12-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.938 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:09.945 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:09.945 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:09.955 [pool-12-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:09.988 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 
19:07:10.038 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.045 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.045 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.056 [pool-12-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.089 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.139 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.146 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.146 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.155 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.190 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.240 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.246 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.246 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.255 [pool-12-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.261 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:10.262 
[main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:10.266 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.291 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.341 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.347 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.347 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.364 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.391 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.442 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.447 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:10.447 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:10.447 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:10.448 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:10.448 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:10.449 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:10.449 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:10.449 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:10.450 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.464 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.465 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:10.465 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: {"distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : "TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { 
"artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ]} 19:07:10.471 [pool-13-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [] } 19:07:10.492 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.542 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.550 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.550 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.564 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.593 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no 
node is available 19:07:10.643 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.650 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.651 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.664 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.693 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.744 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.751 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.751 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.764 [pool-13-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.794 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.844 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:10.844 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:10.844 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:10.845 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer 
clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:10.845 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:10.846 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:10.846 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:10.846 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:10.852 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.852 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.865 [pool-13-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.947 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:10.952 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:10.953 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:10.964 [pool-13-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:10.997 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.047 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.053 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.053 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.064 [pool-13-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.098 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.148 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.154 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.154 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.164 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.198 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.248 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.254 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.255 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.264 [pool-13-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.270 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - sendNotificationStatus 19:07:11.270 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:11.273 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.299 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.349 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.355 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:11.355 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:11.355 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:11.356 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG 
org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:11.356 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:11.357 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:11.357 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:11.358 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
19:07:11.358 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.372 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.400 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.450 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.458 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.458 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.472 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.473 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received message from topic 19:07:11.473 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - received notification from broker: { "distributionID" : "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName" : "Testnotificationser1", "serviceVersion" : "1.0", "serviceUUID" : "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription" : "TestNotificationVF1", "serviceArtifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ], "resources" : [{ "resourceInstanceName" : "testnotificationvf11", "resourceName" : 
"TestNotificationVF1", "resourceVersion" : "1.0", "resoucreType" : "VF", "resourceUUID" : "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts" : [{ "artifactName" : "sample-xml-alldata-1-1.xml", "artifactType" : "YANG_XML", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/sample-xml-alldata-1-1.xml", "artifactChecksum" : "MTUxODFkMmRlOTNhNjYxMGYyYTI1ZjA5Y2QyNWQyYTk\u003d", "artifactDescription" : "MyYang", "artifactTimeout" : 0, "artifactUUID" : "0005bc4a-2c19-452e-be6d-d574a56be4d0", "artifactVersion" : "1" }, { "artifactName" : "heat.yaml", "artifactType" : "HEAT", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum" : "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription" : "heat", "artifactTimeout" : 60, "artifactUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35", "artifactVersion" : "1" }, { "artifactName" : "heat.env", "artifactType" : "HEAT_ENV", "artifactURL" : "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum" : "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription" : "Auto-generated HEAT Environment deployment artifact", "artifactTimeout" : 0, "artifactUUID" : "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "artifactVersion" : "1", "generatedFromUUID" : "8df6123c-f368-47d3-93be-1972cefbcc35" } ] } ] } 19:07:11.482 [pool-14-thread-2] DEBUG org.onap.sdc.impl.NotificationConsumer - sending notification to client: { "distributionID": "bcc7a72e-90b1-4c5f-9a37-28dc3cd86416", "serviceName": "Testnotificationser1", "serviceVersion": "1.0", "serviceUUID": "7f7f94f4-373a-4b71-a0e3-80ae2ba4eb5d", "serviceDescription": "TestNotificationVF1", "resources": [ { "resourceInstanceName": "testnotificationvf11", "resourceName": "TestNotificationVF1", "resourceVersion": "1.0", "resoucreType": "VF", "resourceUUID": "907e1746-9f69-40f5-9f2a-313654092a2d", "artifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" }, "relatedArtifactsInfo": [] } ] } ], "serviceArtifacts": [ { "artifactName": "heat.yaml", "artifactType": "HEAT", "artifactURL": "/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.yaml", "artifactChecksum": "ODEyNjE4YTMzYzRmMTk2ODVhNTU2NTg3YWEyNmIxMTM\u003d", "artifactDescription": "heat", "artifactTimeout": 60, "artifactVersion": "1", "artifactUUID": "8df6123c-f368-47d3-93be-1972cefbcc35", "generatedArtifact": { "artifactName": "heat.env", "artifactType": "HEAT_ENV", "artifactURL": 
"/sdc/v1/catalog/services/Testnotificationser1/1.0/resourceInstances/testnotificationvf11/artifacts/heat.env", "artifactChecksum": "NGIzMjExZTM1NDc2NjBjOTQyMGJmMWNiMmU0NTE5NzM\u003d", "artifactDescription": "Auto-generated HEAT Environment deployment artifact", "artifactTimeout": 0, "artifactVersion": "1", "artifactUUID": "ce65d31c-35c0-43a9-90c7-596fc51d0c86", "generatedFromUUID": "8df6123c-f368-47d3-93be-1972cefbcc35" } } ] } 19:07:11.500 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.551 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.559 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.559 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.572 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.601 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.651 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.659 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.659 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.672 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.701 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.752 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:11.752 
[kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:11.752 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:11.752 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:11.752 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:11.753 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:11.754 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:11.754 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 
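Editor's note on the NotificationConsumer entries above: the "received notification from broker" / "sending notification to client" messages show the raw JSON the client parses before invoking registered callbacks. As a purely illustrative, self-contained sketch (not the sdc-distribution-client code; the POJO field names below simply mirror keys visible in the logged payload, and the sample string is a hypothetical abbreviated payload), such JSON can be deserialized with Gson like this:

import com.google.gson.Gson;
import java.util.List;

public class NotificationSketch {
    // Minimal POJOs mirroring a subset of the fields seen in the logged notification.
    static class Artifact {
        String artifactName;
        String artifactType;
        String artifactURL;
        String artifactUUID;
        int artifactTimeout;
    }
    static class Notification {
        String distributionID;
        String serviceName;
        String serviceVersion;
        List<Artifact> serviceArtifacts;
    }

    public static void main(String[] args) {
        // Hypothetical abbreviated payload in the same shape as the one logged above.
        String json = "{\"distributionID\":\"d-1\",\"serviceName\":\"Testnotificationser1\","
                + "\"serviceVersion\":\"1.0\",\"serviceArtifacts\":"
                + "[{\"artifactName\":\"heat.yaml\",\"artifactType\":\"HEAT\",\"artifactTimeout\":60}]}";
        Notification n = new Gson().fromJson(json, Notification.class);
        // Prints the service name and how many service-level artifacts the notification carries.
        System.out.println(n.serviceName + " -> " + n.serviceArtifacts.size() + " service artifact(s)");
    }
}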
19:07:11.760 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.760 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.772 [pool-14-thread-2] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.854 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.860 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.861 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.872 [pool-14-thread-4] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:11.905 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.957 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:11.961 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:11.961 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:11.972 [pool-14-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:12.008 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.058 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.062 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.072 [pool-14-thread-5] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:12.108 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.158 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.162 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.162 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.171 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null 19:07:12.209 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.259 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.263 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:12.263 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.263 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:12.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.264 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.265 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.common.network.Selector - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:280) at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:321) at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:1454) 19:07:12.265 [kafka-coordinator-heartbeat-thread | mso-group] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Node 1 disconnected. 19:07:12.266 [kafka-coordinator-heartbeat-thread | mso-group] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:12.266 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.271 [pool-14-thread-3] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: null [INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.567 s - in org.onap.sdc.impl.NotificationConsumerTest [INFO] Running org.onap.sdc.impl.HeatParserTest 19:07:12.280 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: just text 19:07:12.309 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.359 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.365 [main] ERROR org.onap.sdc.utils.YamlToObjectConverter - Failed to convert YAML just text to object. 
org.yaml.snakeyaml.constructor.ConstructorException: Can't construct a java object for tag:yaml.org,2002:org.onap.sdc.utils.heat.HeatConfiguration; exception=No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null in 'string', line 1, column 1: just text ^ at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:336) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObjectNoCheck(BaseConstructor.java:230) at org.yaml.snakeyaml.constructor.BaseConstructor.constructObject(BaseConstructor.java:220) at org.yaml.snakeyaml.constructor.BaseConstructor.constructDocument(BaseConstructor.java:174) at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:158) at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:491) at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:470) at org.onap.sdc.utils.YamlToObjectConverter.convertFromString(YamlToObjectConverter.java:113) at org.onap.sdc.utils.heat.HeatParser.getHeatParameters(HeatParser.java:60) at org.onap.sdc.impl.HeatParserTest.testParametersParsingInvalidYaml(HeatParserTest.java:122) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: org.yaml.snakeyaml.error.YAMLException: No single argument constructor found for class org.onap.sdc.utils.heat.HeatConfiguration : null at org.yaml.snakeyaml.constructor.Constructor$ConstructScalar.construct(Constructor.java:393) at org.yaml.snakeyaml.constructor.Constructor$ConstructYamlObject.construct(Constructor.java:332) ... 76 common frames omitted 19:07:12.366 [main] ERROR org.onap.sdc.utils.heat.HeatParser - Couldn't parse HEAT template. 19:07:12.366 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 19:07:12.366 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.366 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.393 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances parameters: image_name_1: type: string label: Image Name description: SCOIMAGE Specify an image name for instance1 default: cirros-0.3.1-x86_64 image_name_2: type: string label: Image Name description: SCOIMAGE Specify an image name for instance2 default: cirros-0.3.1-x86_64 network_id: type: string label: Network ID description: SCONETWORK Network to be used for the compute instance hidden: true constraints: - length: { min: 6, max: 8 } description: Password length must be between 6 and 8 characters. - range: { min: 6, max: 8 } description: Range description - allowed_values: - m1.small - m1.medium - m1.large description: Allowed values description - allowed_pattern: "[a-zA-Z0-9]+" description: Password must consist of characters and numbers only. - allowed_pattern: "[A-Z]+[a-zA-Z0-9]*" description: Password must start with an uppercase character. 
- custom_constraint: nova.keypair description: Custom description resources: my_instance1: type: OS::Nova::Server properties: image: { get_param: image_name_1 } flavor: m1.small networks: - network : { get_param : network_id } my_instance2: type: OS::Nova::Server properties: image: { get_param: image_name_2 } flavor: m1.tiny networks: - network : { get_param : network_id } 19:07:12.410 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 19:07:12.443 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Found HEAT parameters: {image_name_1=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance1, image_name_2=type:string, label:Image Name, default:cirros-0.3.1-x86_64, hidden:false, description:SCOIMAGE Specify an image name for instance2, network_id=type:string, label:Network ID, hidden:true, constraints:[length:{min=6, max=8}, description:Password length must be between 6 and 8 characters., range:{min=6, max=8}, description:Range description, allowed_values:[m1.small, m1.medium, m1.large], description:Allowed values description, allowed_pattern:[a-zA-Z0-9]+, description:Password must consist of characters and numbers only., allowed_pattern:[A-Z]+[a-zA-Z0-9]*, description:Password must start with an uppercase character., custom_constraint:nova.keypair, description:Custom description], description:SCONETWORK Network to be used for the compute instance} 19:07:12.445 [main] DEBUG org.onap.sdc.utils.heat.HeatParser - Start of extracting HEAT parameters from file, file contents: heat_template_version: 2013-05-23 description: Simple template to deploy a stack with two virtual machine instances 19:07:12.446 [main] WARN org.onap.sdc.utils.heat.HeatParser - HEAT template parameters section wasn't found or is empty. 
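Editor's note on the HeatParser entries above: the test feeds a HEAT template to the parser, which uses SnakeYAML and then reports either the extracted "parameters" map or the "parameters section wasn't found or is empty" warning. A minimal sketch of that idea, assuming a plain SnakeYAML load into a Map (this is not the HeatParser implementation itself):

import org.yaml.snakeyaml.Yaml;
import java.util.Map;

public class HeatParametersSketch {
    // Returns the "parameters" section of a HEAT template, or null when the template
    // has no such section (the case the log reports as a warning above).
    @SuppressWarnings("unchecked")
    public static Map<String, Object> extractParameters(String heatTemplate) {
        Map<String, Object> template = new Yaml().load(heatTemplate);
        if (template == null) {
            return null;
        }
        return (Map<String, Object>) template.get("parameters");
    }

    public static void main(String[] args) {
        String template =
              "heat_template_version: 2013-05-23\n"
            + "parameters:\n"
            + "  image_name_1:\n"
            + "    type: string\n"
            + "    default: cirros-0.3.1-x86_64\n";
        // Prints {image_name_1={type=string, default=cirros-0.3.1-x86_64}}
        System.out.println(extractParameters(template));
    }
}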
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 s - in org.onap.sdc.impl.HeatParserTest [INFO] Running org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.006 s - in org.onap.sdc.impl.DistributionStatusMessageImplTest [INFO] Running org.onap.sdc.impl.NotificationCallbackBuilderTest 19:07:12.460 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.467 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.467 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 s - in org.onap.sdc.impl.NotificationCallbackBuilderTest [INFO] Running org.onap.sdc.impl.SerializationTest 19:07:12.511 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.578 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.578 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.579 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.629 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 s - in org.onap.sdc.impl.SerializationTest [INFO] Running org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in org.onap.sdc.impl.DistributionClientDownloadResultTest [INFO] Running org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 s - in org.onap.sdc.impl.ConfigurationValidatorTest [INFO] Running org.onap.sdc.impl.DistributionClientTest 19:07:12.678 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.679 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.679 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.680 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.682 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 19:07:12.682 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 19:07:12.683 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@5075ce9a 19:07:12.684 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms 
= 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:12.685 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Instantiated an idempotent producer. 19:07:12.687 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:12.687 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:12.687 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112032687 19:07:12.687 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 19:07:12.688 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.688 [main] WARN org.onap.sdc.impl.DistributionClientImpl - distribution client already initialized 19:07:12.689 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Starting Kafka producer I/O thread. 
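Editor's note on the ProducerConfig dump above: it shows the distribution client's Kafka producer configured for SASL_PLAINTEXT with the PLAIN mechanism and String serializers against bootstrap server localhost:9092. A hedged sketch of an equivalently configured producer follows; the client id and JAAS credentials are placeholders, not values taken from this build:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigSketch {
    public static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        // Values mirroring the ProducerConfig dump in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer"); // placeholder client id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // SASL_PLAINTEXT with the PLAIN mechanism, as shown in the dump.
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
          + "username=\"user\" password=\"secret\";"); // placeholder credentials
        return new KafkaProducer<>(props);
    }
}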
19:07:12.689 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Transition from state UNINITIALIZED to INITIALIZING 19:07:12.690 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.690 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.692 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.692 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.692 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.692 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.692 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.693 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.694 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 19:07:12.695 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.695 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 19:07:12.695 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.696 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 19:07:12.696 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.696 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 19:07:12.696 [main] INFO org.onap.sdc.impl.DistributionClientImpl - 
DistributionClient - init 19:07:12.697 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 19:07:12.697 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.697 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.697 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Node -1 disconnected. 19:07:12.697 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.697 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.697 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_SDC_FQDN, responseMessage=configuration is invalid: CONF_MISSING_SDC_FQDN] 19:07:12.697 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.697 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.698 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_INVALID_SDC_FQDN, responseMessage=configuration is invalid: CONF_INVALID_SDC_FQDN] 19:07:12.698 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.698 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 19:07:12.698 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.699 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_CONSUMER_ID, responseMessage=configuration is invalid: CONF_MISSING_CONSUMER_ID] 19:07:12.699 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.699 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 19:07:12.700 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.700 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_ENVIRONMENT_NAME, responseMessage=configuration is invalid: CONF_MISSING_ENVIRONMENT_NAME] 19:07:12.700 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.700 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized isUseHttpsWithSDC set to true 19:07:12.702 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.727 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 7caf7dd5-8ffd-43a3-be79-1deba3b6d945 url= /sdc/v1/artifactTypes 19:07:12.727 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://badhost:8080/sdc/v1/artifactTypes 19:07:12.730 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.779 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.779 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.780 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initialize connection to node localhost:38537 (id: 1 rack: null) for sending metadata request 19:07:12.780 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.780 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Initiating connection to node localhost:38537 (id: 1 rack: null) using address localhost/127.0.0.1 19:07:12.780 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.780 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.781 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection with localhost/127.0.0.1 (channelId=1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:328) at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.781 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Node 1 disconnected. 19:07:12.781 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Connection to node 1 (localhost/127.0.0.1:38537) could not be established. Broker may not be available. 19:07:12.783 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: badhost: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at 
org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:189) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:07:12.785 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@31e6654f 19:07:12.785 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 19:07:12.785 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 19:07:12.786 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.797 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.798 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.798 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.798 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.798 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.798 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.799 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] 
Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.799 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Node -1 disconnected. 19:07:12.799 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.799 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.799 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.813 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 621544ab-58ae-426a-bdb9-4ef77066bf86 url= /sdc/v1/artifactTypes 19:07:12.813 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send https://localhost:8181/sdc/v1/artifactTypes 19:07:12.816 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes org.apache.http.conn.HttpHostConnectException: Connect to localhost:8181 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcTest(DistributionClientTest.java:195) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native 
Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:368) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 98 common frames omitted 19:07:12.817 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@4bb5b74b 19:07:12.817 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 19:07:12.817 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 19:07:12.817 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.817 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.819 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.820 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 19:07:12.820 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 19:07:12.820 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@7c2062db 19:07:12.820 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 
sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:12.821 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Instantiated an idempotent producer. 19:07:12.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:12.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:12.823 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112032823 19:07:12.823 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Kafka producer started DistributionClientResultImpl [responseStatus=SUCCESS, responseMessage=distribution client initialized successfully] 19:07:12.823 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.824 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Starting Kafka producer I/O thread. 
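
The ProducerConfig dump above shows the settings the distribution client hands to its internal Kafka producer during init: bootstrap server localhost:9092, SASL_PLAINTEXT with the PLAIN mechanism (JAAS config hidden), idempotence enabled, acks = -1 (all), and String serializers for key and value. As a rough illustration only, a stand-alone producer configured the same way with plain kafka-clients APIs could look like the sketch below; the class name, client id, topic name, and JAAS credentials are placeholders and not taken from this build, and with no broker listening on localhost:9092 it would emit the same "Node -1 disconnected" / "Broker may not be available" warnings seen throughout this output.

import java.util.Properties;

import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

/** Hypothetical stand-alone sketch mirroring the logged ProducerConfig values; not the client's actual wrapper class. */
public class SaslPlainProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "example-producer");          // placeholder client id
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                            // logged as acks = -1
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        // The real JAAS config is logged as [hidden]; these credentials are placeholders.
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                        + "username=\"user\" password=\"changeit\";");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Without a reachable broker this send eventually fails after delivery.timeout.ms,
            // after logging the same connection-refused warnings as the test run above.
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));
            producer.flush();
        }
    }
}
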
19:07:12.824 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Transition from state UNINITIALIZED to INITIALIZING 19:07:12.824 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.825 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.825 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.825 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.825 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.826 [main] INFO org.onap.sdc.impl.DistributionClientImpl - start DistributionClient 19:07:12.826 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.826 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.827 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.827 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.827 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at 
org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.828 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Node -1 disconnected. 19:07:12.828 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.828 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.828 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.831 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.831 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.832 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 19:07:12.832 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_PASSWORD, responseMessage=configuration is invalid: CONF_MISSING_PASSWORD] 19:07:12.832 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.832 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.833 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.837 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. 
requestId= 88e56623-5a8a-44d9-8c48-8472b57c58fb url= /sdc/v1/artifactTypes 19:07:12.837 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://badhost:8080/sdc/v1/artifactTypes 19:07:12.841 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy: System error at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method) at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:929) at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529) at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:848) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at 
org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:207) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:07:12.841 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@1e896761 19:07:12.841 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 19:07:12.842 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 19:07:12.842 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.843 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - about to perform get on SDC. requestId= cc1d14a2-edbe-4f9d-b0f2-094260ca43ea url= /sdc/v1/artifactTypes 19:07:12.843 [main] DEBUG org.onap.sdc.http.HttpSdcClient - url to send http://localhost:8181/sdc/v1/artifactTypes 19:07:12.843 [main] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: /sdc/v1/artifactTypes java.net.UnknownHostException: proxy at java.base/java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1378) at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1306) at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:112) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:401) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.performSdcServerRequest(SdcConnectorClient.java:120) at org.onap.sdc.http.SdcConnectorClient.getValidArtifactTypesList(SdcConnectorClient.java:74) at org.onap.sdc.impl.DistributionClientImpl.validateArtifactTypesWithSdcServer(DistributionClientImpl.java:300) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:129) at java.base/java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:710) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor$Dispatcher$ByteBuddy$0KOrTgaY.invokeWithArguments(Unknown Source) at org.mockito.internal.util.reflection.InstrumentationMemberAccessor.invoke(InstrumentationMemberAccessor.java:239) at org.mockito.internal.util.reflection.ModuleMemberAccessor.invoke(ModuleMemberAccessor.java:55) at 
org.mockito.internal.creation.bytebuddy.MockMethodAdvice.tryInvoke(MockMethodAdvice.java:333) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.access$500(MockMethodAdvice.java:60) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice$RealMethodCall.invoke(MockMethodAdvice.java:253) at org.mockito.internal.invocation.InterceptedInvocation.callRealMethod(InterceptedInvocation.java:142) at org.mockito.internal.stubbing.answers.CallsRealMethods.answer(CallsRealMethods.java:45) at org.mockito.Answers.answer(Answers.java:99) at org.mockito.internal.handler.MockHandlerImpl.handle(MockHandlerImpl.java:110) at org.mockito.internal.handler.NullResultGuardian.handle(NullResultGuardian.java:29) at org.mockito.internal.handler.InvocationNotifierHandler.handle(InvocationNotifierHandler.java:34) at org.mockito.internal.creation.bytebuddy.MockMethodInterceptor.doIntercept(MockMethodInterceptor.java:82) at org.mockito.internal.creation.bytebuddy.MockMethodAdvice.handle(MockMethodAdvice.java:151) at org.onap.sdc.impl.DistributionClientImpl.init(DistributionClientImpl.java:117) at org.onap.sdc.impl.DistributionClientTest.initFailedConnectSdcInHttpTest(DistributionClientTest.java:214) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:688) at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131) at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140) at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84) at org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115) at org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105) at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45) at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104) at org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$6(TestMethodTestDescriptor.java:210) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:206) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:131) at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:65) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at java.base/java.util.ArrayList.forEach(ArrayList.java:1541) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:143) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:129) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:127) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:126) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:84) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:32) at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57) at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:51) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:108) at 
org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:88) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:54) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:67) at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:52) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:96) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:75) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:127) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138) at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451) 19:07:12.844 [main] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@ece8b3 19:07:12.844 [main] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=SDC_CONNECTION_FAILED, responseMessage=SDC server problem] 19:07:12.844 [main] DEBUG org.onap.sdc.http.SdcConnectorClient - error from SDC is: failed to connect 19:07:12.844 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.844 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.846 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.846 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.847 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 19:07:12.847 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 19:07:12.847 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. value should be greater than or equals to 15 19:07:12.847 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 19:07:12.847 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.847 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized 19:07:12.849 [main] INFO org.onap.sdc.impl.DistributionClientImpl - DistributionClient - init 19:07:12.849 [main] WARN org.onap.sdc.impl.DistributionClientImpl - polling interval is out of range. 
value should be greater than or equals to 15 19:07:12.849 [main] WARN org.onap.sdc.impl.DistributionClientImpl - setting polling interval to default: 15 19:07:12.849 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Artifact types: [HEAT] were validated with SDC server 19:07:12.849 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - Get MessageBus cluster information from SDC 19:07:12.849 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - MessageBus cluster info retrieved successfully org.onap.sdc.utils.kafka.KafkaDataResponse@370db68d 19:07:12.850 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:9092] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 
transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:12.850 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Instantiated an idempotent producer. 19:07:12.854 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:12.854 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:12.854 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112032854 19:07:12.854 [main] DEBUG org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Kafka producer started 19:07:12.855 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Starting Kafka producer I/O thread. 19:07:12.855 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Transition from state UNINITIALIZED to INITIALIZING 19:07:12.855 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.856 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.856 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.856 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.857 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.857 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.859 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Connection with localhost/127.0.0.1 (channelId=-1) 
disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.859 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Node -1 disconnected. 19:07:12.859 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.859 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.859 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
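For reference, the producer settings dumped above (SASL_PLAINTEXT with the PLAIN mechanism, idempotence enabled, String key/value serializers, bootstrap server localhost:9092) can be reproduced with the plain kafka-clients API roughly as in the sketch below. This is an illustration only, not code from this build: the client id and the JAAS credentials (shown as [hidden] in the log) are placeholders.

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerConfigSketch {
        // Builds a producer with settings mirroring the ProducerConfig dump in this log.
        public static KafkaProducer<String, String> buildProducer(String bootstrapServers, String clientId) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);   // e.g. "localhost:9092"
            props.put(ProducerConfig.CLIENT_ID_CONFIG, clientId);                   // e.g. "mso-123456-producer-..."
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);              // implies acks=-1 and effectively unbounded retries
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            // Placeholder JAAS line; the real credentials are hidden in the log output.
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"changeme\" password=\"changeme\";");
            return new KafkaProducer<>(props);
        }
    }

Because no broker is listening on localhost:9092 in this unit-test environment, instantiating such a producer starts the sender thread, which then produces exactly the "Connection refused" / "Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected" retry loop recorded below.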
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.879 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.879 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request Configuration [sdcAddress=localhost:8443, user=mso-user, password=password, useHttpsWithSDC=true, pollingInterval=15, sdcStatusTopicName=SDC-DISTR-STATUS-TOPIC-AUTO, sdcNotificationTopicName=SDC-DISTR-NOTIF-TOPIC-AUTO, pollingTimeout=20, relevantArtifactTypes=[HEAT], consumerGroup=mso-group, environmentName=PROD, comsumerID=mso-123456, keyStorePath=src/test/resources/etc/sdc-user-keystore.jks, trustStorePath=src/test/resources/etc/sdc-user-truststore.jks, activateServerTLSAuth=true, filterInEmptyResources=false, consumeProduceStatusTopic=false, useSystemProxy=false, httpProxyHost=proxy, httpProxyPort=8080, httpsProxyHost=null, httpsProxyPort=0] 19:07:12.881 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.882 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.883 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 19:07:12.883 [main] ERROR org.onap.sdc.impl.DistributionClientImpl - DistributionClientResultImpl [responseStatus=CONF_MISSING_USERNAME, responseMessage=configuration is invalid: CONF_MISSING_USERNAME] 19:07:12.883 [main] INFO org.onap.sdc.impl.DistributionClientImpl - stop DistributionClient 19:07:12.883 [main] DEBUG org.onap.sdc.impl.DistributionClientImpl - client was not initialized [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 s - in org.onap.sdc.impl.DistributionClientTest 19:07:12.899 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.899 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, 
producerId=-1, producerEpoch=-1) 19:07:12.899 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Give up sending metadata request since no node is available 19:07:12.928 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.928 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.928 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.928 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.929 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.929 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.930 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at 
org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.930 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Node -1 disconnected. 19:07:12.930 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.930 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.930 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.932 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:12.950 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.950 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.950 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.950 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.950 
[kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.951 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.951 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Node -1 disconnected. 19:07:12.951 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.951 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.951 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. 
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.960 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:12.960 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:12.960 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:12.960 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:12.961 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:12.961 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] 19:07:12.962 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.network.Selector - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Connection with localhost/127.0.0.1 (channelId=-1) disconnected java.net.ConnectException: Connection refused at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777) at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:224) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:526) at org.apache.kafka.common.network.Selector.poll(Selector.java:481) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560) at org.apache.kafka.clients.NetworkClientUtils.isReady(NetworkClientUtils.java:42) at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:64) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at 
org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.962 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Node -1 disconnected. 19:07:12.962 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. 19:07:12.962 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected 19:07:12.962 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.Sender - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Disconnect from localhost:9092 (id: -1 rack: null) while trying to send request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1). Going to back off and retry. java.io.IOException: Connection to localhost:9092 (id: -1 rack: null) failed. at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) at org.apache.kafka.clients.producer.internals.Sender.awaitNodeReady(Sender.java:534) at org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:455) at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:316) at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:243) at java.base/java.lang.Thread.run(Thread.java:829) 19:07:12.980 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] Give up sending metadata request since no node is available 19:07:12.980 [kafka-coordinator-heartbeat-thread | mso-group] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=mso-123456-consumer-e3947c42-8a25-4603-b050-ed449cbe5e27, groupId=mso-group] No broker available to send FindCoordinator request 19:07:12.982 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:13.030 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:13.030 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG 
org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:13.031 [kafka-producer-network-thread | mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-c0214502-f336-4c34-978d-ecd84726c24b] Give up sending metadata request since no node is available 19:07:13.033 [kafka-producer-network-thread | mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-f273d893-2274-4515-b9d3-12029fbb5bf7] Give up sending metadata request since no node is available 19:07:13.054 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:13.054 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:13.054 [kafka-producer-network-thread | mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-e1bc41c7-1103-4722-9621-0a0777132466] Give up sending metadata request since no node is available 19:07:13.065 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Enqueuing transactional request InitProducerIdRequestData(transactionalId=null, transactionTimeoutMs=2147483647, producerId=-1, producerEpoch=-1) 19:07:13.065 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initialize connection to node localhost:9092 (id: -1 rack: null) for sending metadata request 19:07:13.065 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.ClientUtils - Resolved host localhost as 127.0.0.1 19:07:13.065 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.clients.NetworkClient - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Initiating connection to node localhost:9092 (id: -1 rack: null) using address localhost/127.0.0.1 19:07:13.066 [kafka-producer-network-thread | mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Set SASL client state to SEND_APIVERSIONS_REQUEST 19:07:13.066 [kafka-producer-network-thread | 
mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] DEBUG org.apache.kafka.common.security.authenticator.SaslClientAuthenticator - [Producer clientId=mso-123456-producer-ee1d5a98-b9ab-455d-ad9e-90f957d203ef] Creating SaslClient: client=null;service=kafka;serviceHostname=localhost;mechs=[PLAIN] [INFO] [INFO] Results: [INFO] [INFO] Tests run: 72, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-client --- [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-ut.exec [INFO] Analyzed bundle 'sdc-distribution-client' with 48 classes [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-client --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-client --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.sdc.api.consumer... Loading source files for package org.onap.sdc.api... Loading source files for package org.onap.sdc.api.notification... Loading source files for package org.onap.sdc.api.results... Loading source files for package org.onap.sdc.http... Loading source files for package org.onap.sdc.utils... Loading source files for package org.onap.sdc.utils.kafka... Loading source files for package org.onap.sdc.utils.heat... Loading source files for package org.onap.sdc.impl... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IArtifactInfo.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/StatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientImpl.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/StatusDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Pair.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParameterConstraint.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-tree.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/serialized-form.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IDistributionStatusMessageBasic.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IStatusCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IFinalDistrStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/INotificationCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IComponentDoneStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/class-use/IConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/class-use/IDistributionStatusMessageJsonBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IVfModuleMetadata.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/IStatusData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/INotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/class-use/StatusMessage.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientDownloadResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/class-use/IDistributionClientResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcClientException.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcUrls.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpRequestFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/HttpSdcResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/SdcConnectorClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/class-use/IHttpSdcClient.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/NotificationSender.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/CaseInsensitiveMap.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Wrapper.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/YamlToObjectConverter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionActionResultEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionClientConstants.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/GeneralUtils.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/DistributionStatusEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/class-use/Pair.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaConsumer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/SdcKafkaProducer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaCommonConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/class-use/KafkaDataResponse.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameterConstraint.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatConfiguration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParameter.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/class-use/HeatParser.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionStatusMessageJsonBuilderFactory.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationDataImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ConfigurationValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientDownloadResultImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ArtifactInfo.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationData.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/ResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/NotificationCallbackBuilder.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/StatusDataImpl.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/JsonContainerResourceInstance.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/DistributionClientImpl.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/class-use/Configuration.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/consumer/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/notification/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/api/results/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/http/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/impl/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/heat/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/org/onap/sdc/utils/kafka/package-use.html... Building index for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/deprecated-list.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/apidocs/help-doc.html... 
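
The apidocs generated above cover the client's public callback interfaces (org.onap.sdc.api.consumer and org.onap.sdc.api.notification), which the integration test further down exercises through ClientNotifyCallback. A minimal, hedged sketch of such a callback follows; the accessor names on INotificationData are assumptions inferred from the "Service UUID" / "Service name" lines logged later, not verified against this revision:

    import org.onap.sdc.api.consumer.INotificationCallback;
    import org.onap.sdc.api.notification.INotificationData;

    // Hypothetical callback, loosely modelled on the test's ClientNotifyCallback.
    public class LoggingNotificationCallback implements INotificationCallback {
        @Override
        public void activateCallback(INotificationData data) {
            // getServiceUUID()/getServiceName() are assumed accessors matching the
            // "Service UUID:" / "Service name:" lines the test logs below.
            System.out.println("Service UUID: " + data.getServiceUUID());
            System.out.println("Service name: " + data.getServiceName());
        }
    }
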
[INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-client --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-client --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-client --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-client --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-client/target/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-client/2.1.2-SNAPSHOT/sdc-distribution-client-2.1.2-SNAPSHOT-javadoc.jar [INFO] [INFO] ------< org.onap.sdc.sdc-distribution-client:sdc-distribution-ci >------ [INFO] Building sdc-distribution-ci 2.1.2-SNAPSHOT [3/3] [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-property) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-snapshots) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-unit-test) @ sdc-distribution-ci --- [INFO] surefireArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (prepare-agent) @ sdc-distribution-ci --- [INFO] argLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/jacoco.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-checkstyle-plugin:3.1.1:check (onap-license) @ sdc-distribution-ci --- [INFO] [INFO] --- 
maven-checkstyle-plugin:3.1.1:check (onap-java-style) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 10 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/main/java/org/onap/test/core/service/ClientNotifyCallback.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ sdc-distribution-ci --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ sdc-distribution-ci --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 2 source files to /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/test-classes [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java uses or overrides a deprecated API. [INFO] /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/src/test/java/org/onap/test/core/service/CustomKafkaContainer.java: Recompile with -Xlint:deprecation for details. [INFO] [INFO] --- maven-surefire-plugin:3.0.0-M4:test (default-test) @ sdc-distribution-ci --- [INFO] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running org.onap.test.core.service.ClientInitializerTest EnvironmentVariableExtension: This extension uses reflection to mutate JDK-internal state, which is fragile. Check the Javadoc or documentation for more details. 19:07:19.570 [main] WARN org.testcontainers.utility.TestcontainersConfiguration - Attempted to read Testcontainers configuration file at file:/home/jenkins/.testcontainers.properties but the file was not found. 
Exception message: FileNotFoundException: /home/jenkins/.testcontainers.properties (No such file or directory) 19:07:19.582 [main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor') 19:07:20.646 [main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock) 19:07:20.657 [main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost 19:07:20.714 [main] INFO org.testcontainers.DockerClientFactory - Connected to docker: Server Version: 20.10.18 API Version: 1.41 Operating System: Ubuntu 18.04.6 LTS Total Memory: 32167 MB 19:07:20.749 [main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling docker image: testcontainers/ryuk:0.3.3. Please be patient; this may take some time but only needs to be done once. 19:07:20.759 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory) 19:07:21.176 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Starting to pull image 19:07:21.214 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 19:07:21.494 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 2 pending, 1 downloaded, 0 extracted, (327 KB/? MB) 19:07:21.507 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 1 pending, 2 downloaded, 0 extracted, (327 KB/? MB) 19:07:21.516 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 0 extracted, (330 KB/5 MB) 19:07:21.715 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 1 extracted, (2 MB/5 MB) 19:07:21.959 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 2 extracted, (2 MB/5 MB) 19:07:22.121 [docker-java-stream-256183458] INFO 🐳 [testcontainers/ryuk:0.3.3] - Pulling image layers: 0 pending, 3 downloaded, 3 extracted, (5 MB/5 MB) 19:07:23.481 [main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit 19:07:23.482 [main] INFO org.testcontainers.DockerClientFactory - Checking the system... 19:07:23.483 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0 19:07:23.571 [main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker environment should have more than 2GB free disk space 19:07:23.578 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling docker image: confluentinc/cp-kafka:6.2.1. Please be patient; this may take some time but only needs to be done once. 
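
The test brings up its Kafka broker with Testcontainers, pulling confluentinc/cp-kafka:6.2.1 as logged above (Ryuk, pulled first, is the sidecar Testcontainers uses to reap containers when the JVM exits). A minimal sketch of starting the same image with the stock KafkaContainer class; the build's CustomKafkaContainer layers SASL configuration on top of this, which is not reproduced here:

    import org.testcontainers.containers.KafkaContainer;
    import org.testcontainers.utility.DockerImageName;

    public class KafkaBrokerForTests {
        public static void main(String[] args) {
            // Same broker image the build pulls above.
            try (KafkaContainer kafka =
                     new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))) {
                kafka.start();
                // Testcontainers exposes the broker on a random mapped host port
                // (localhost:43219 in this run).
                System.out.println("bootstrap.servers = " + kafka.getBootstrapServers());
            }
        }
    }
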
19:07:23.889 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Starting to pull image 19:07:23.891 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 19:07:24.042 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 10 pending, 1 downloaded, 0 extracted, (1 KB/? MB) 19:07:24.389 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 9 pending, 2 downloaded, 0 extracted, (31 MB/? MB) 19:07:24.531 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 8 pending, 3 downloaded, 0 extracted, (44 MB/? MB) 19:07:25.189 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 7 pending, 4 downloaded, 0 extracted, (138 MB/? MB) 19:07:25.305 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 6 pending, 5 downloaded, 0 extracted, (152 MB/? MB) 19:07:25.344 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 5 pending, 6 downloaded, 0 extracted, (152 MB/? MB) 19:07:25.484 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 4 pending, 7 downloaded, 0 extracted, (163 MB/? MB) 19:07:25.508 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 3 pending, 8 downloaded, 0 extracted, (171 MB/? MB) 19:07:25.735 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 0 extracted, (186 MB/? MB) 19:07:26.403 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 1 extracted, (261 MB/? MB) 19:07:26.515 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 2 pending, 9 downloaded, 2 extracted, (283 MB/? MB) 19:07:27.311 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 2 extracted, (358 MB/? MB) 19:07:32.513 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 3 extracted, (362 MB/? MB) 19:07:32.728 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 4 extracted, (366 MB/? MB) 19:07:32.842 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 5 extracted, (366 MB/? MB) 19:07:33.321 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 6 extracted, (369 MB/? MB) 19:07:33.433 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 7 extracted, (369 MB/? MB) 19:07:33.547 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 8 extracted, (369 MB/? MB) 19:07:33.643 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 9 extracted, (369 MB/? MB) 19:07:34.386 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 10 extracted, (370 MB/? 
MB) 19:07:34.487 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pulling image layers: 1 pending, 10 downloaded, 11 extracted, (370 MB/? MB) 19:07:34.502 [docker-java-stream-1639074418] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Pull complete. 11 layers, pulled in 10s (downloaded 370 MB at 37 MB/s) 19:07:34.512 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Creating container for image: confluentinc/cp-kafka:6.2.1 19:07:43.557 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 is starting: 60a7fe8dd4a650a4c84ef8e67a79ca6015a7437f119a8d90de2f1a3e278710ac 19:07:48.843 [main] INFO 🐳 [confluentinc/cp-kafka:6.2.1] - Container confluentinc/cp-kafka:6.2.1 started in PT25.268156S 19:07:50.770 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling docker image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master. Please be patient; this may take some time but only needs to be done once. 19:07:50.771 [main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: nexus3.onap.org:10001/onap/onap-component-mock-sdc:latest, configFile: /home/jenkins/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/jenkins/.docker/config.json (No such file or directory) 19:07:51.545 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Starting to pull image 19:07:51.546 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 0 downloaded, 0 extracted, (0 bytes/0 bytes) 19:07:51.794 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 0 extracted, (62 KB/5 MB) 19:07:51.953 [docker-java-stream--807276005] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Pulling image layers: 0 pending, 1 downloaded, 1 extracted, (5 MB/5 MB) 19:07:51.979 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Creating container for image: nexus3.onap.org:10001/onap/onap-component-mock-sdc:master 19:07:52.376 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master is starting: 1eeebfe60f32ec2cf498c273b8a22c0182f237b297a16909506eb6d399eaea1b 19:07:53.354 [main] INFO org.testcontainers.containers.wait.strategy.HttpWaitStrategy - /stupefied_kalam: Waiting for 60 seconds for URL: http://localhost:49155/sdc/v1/artifactTypes (where port 49155 maps to container port 30206) 19:07:53.374 [main] INFO 🐳 [nexus3.onap.org:10001/onap/onap-component-mock-sdc:master] - Container nexus3.onap.org:10001/onap/onap-component-mock-sdc:master started in PT2.605574S 19:07:54.518 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms = 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 
max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:54.620 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Instantiated an idempotent producer. 19:07:54.683 [main] INFO org.apache.kafka.common.security.authenticator.AbstractLogin - Successfully logged in. 
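
The ProducerConfig dump above shows the settings the distribution client's status producer runs with: SASL_PLAINTEXT with the PLAIN mechanism, String serializers, idempotence enabled, and the Testcontainers broker at localhost:43219. A rough, self-contained equivalent in plain Kafka client code is sketched below; the JAAS credentials are placeholders, since the real sasl.jaas.config is [hidden] in the dump:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.config.SaslConfigs;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class StatusProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the ProducerConfig dump above; broker address and
            // credentials are placeholders for whatever the test environment provides.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43219");
            props.put(ProducerConfig.CLIENT_ID_CONFIG, "dcae-openapi-manager-producer");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
            props.put(SaslConfigs.SASL_JAAS_CONFIG,
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The client publishes distribution status JSON to SDC-DIST-STATUS-TOPIC,
                // as seen in the sendStatus messages later in this log.
                producer.send(new ProducerRecord<>("SDC-DIST-STATUS-TOPIC",
                        "{\"status\":\"NOT_NOTIFIED\"}"));
                producer.flush();
            }
        }
    }
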
19:07:54.725 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:54.726 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:54.726 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112074723 19:07:54.730 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client initialized successfully 19:07:54.730 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 19:07:54.730 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 19:07:54.748 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig - ConsumerConfig values: allow.auto.create.topics = false auto.commit.interval.ms = 5000 auto.offset.reset = latest bootstrap.servers = [localhost:43219] check.crcs = true client.dns.lookup = use_all_dns_ips client.id = dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6 client.rack = connections.max.idle.ms = 540000 default.api.timeout.ms = 60000 enable.auto.commit = true exclude.internal.topics = true fetch.max.bytes = 52428800 fetch.max.wait.ms = 500 fetch.min.bytes = 1 group.id = noapp group.instance.id = null heartbeat.interval.ms = 3000 interceptor.classes = [] internal.leave.group.on.close = true internal.throw.on.fetch.stable.offset.unsupported = false isolation.level = read_uncommitted key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer max.partition.fetch.bytes = 1048576 max.poll.interval.ms = 300000 max.poll.records = 500 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 session.timeout.ms = 45000 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null 
ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 19:07:54.816 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:54.816 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:54.816 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112074816 19:07:54.817 [main] INFO org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Subscribed to topic(s): SDC-DIST-NOTIF-TOPIC 19:07:54.821 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client started successfully 19:07:54.821 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 19:07:54.821 [pool-1-thread-1] INFO org.onap.sdc.impl.NotificationConsumer - Polling for messages from topic: SDC-DIST-NOTIF-TOPIC 19:07:55.377 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Cluster ID: VwX1_qxURn2AP_643I1CpA 19:07:55.377 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Error while fetching metadata with correlation id 2 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 19:07:55.378 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Cluster ID: VwX1_qxURn2AP_643I1CpA 19:07:55.383 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] ProducerId set to 0 with epoch 0 19:07:55.489 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Error while fetching metadata with correlation id 4 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 19:07:55.593 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Error while fetching metadata with correlation id 6 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 19:07:55.603 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Discovered group coordinator localhost:43219 (id: 2147483646 rack: null) 19:07:55.615 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] (Re-)joining group 19:07:55.647 [pool-1-thread-1] 
INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Request joining group due to: need to re-join with the given member-id: dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b 19:07:55.648 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 19:07:55.648 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] (Re-)joining group 19:07:55.678 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Successfully joined group with generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b', protocol='range'} 19:07:55.697 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Error while fetching metadata with correlation id 11 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 19:07:55.701 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Finished assignment for group at generation 1: {dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b=Assignment(partitions=[])} 19:07:55.758 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Successfully synced group in generation Generation{generationId=1, memberId='dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b', protocol='range'} 19:07:55.759 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Notifying assignor about the new Assignment(partitions=[]) 19:07:55.759 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Adding newly assigned partitions: 19:07:55.802 [pool-1-thread-1] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Error while fetching metadata with correlation id 13 : {SDC-DIST-NOTIF-TOPIC=UNKNOWN_TOPIC_OR_PARTITION} 19:07:55.823 [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig values: acks = -1 batch.size = 16384 bootstrap.servers = [PLAINTEXT://localhost:43219] buffer.memory = 33554432 client.dns.lookup = use_all_dns_ips client.id = producer-1 compression.type = none connections.max.idle.ms = 540000 delivery.timeout.ms 
= 120000 enable.idempotence = true interceptor.classes = [] key.serializer = class org.apache.kafka.common.serialization.StringSerializer linger.ms = 0 max.block.ms = 60000 max.in.flight.requests.per.connection = 5 max.request.size = 1048576 metadata.max.age.ms = 300000 metadata.max.idle.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 partitioner.adaptive.partitioning.enable = true partitioner.availability.timeout.ms = 0 partitioner.class = null partitioner.ignore.keys = false receive.buffer.bytes = 32768 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 30000 retries = 2147483647 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = [hidden] sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.connect.timeout.ms = null sasl.login.read.timeout.ms = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.login.retry.backoff.max.ms = 10000 sasl.login.retry.backoff.ms = 100 sasl.mechanism = PLAIN sasl.oauthbearer.clock.skew.seconds = 30 sasl.oauthbearer.expected.audience = null sasl.oauthbearer.expected.issuer = null sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 sasl.oauthbearer.jwks.endpoint.url = null sasl.oauthbearer.scope.claim.name = scope sasl.oauthbearer.sub.claim.name = sub sasl.oauthbearer.token.endpoint.url = null security.protocol = SASL_PLAINTEXT security.providers = null send.buffer.bytes = 131072 socket.connection.setup.timeout.max.ms = 30000 socket.connection.setup.timeout.ms = 10000 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.3] ssl.endpoint.identification.algorithm = https ssl.engine.factory.class = null ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.certificate.chain = null ssl.keystore.key = null ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLSv1.3 ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.certificates = null ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS transaction.timeout.ms = 60000 transactional.id = null value.serializer = class org.apache.kafka.common.serialization.StringSerializer 19:07:55.826 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Instantiated an idempotent producer. 
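
On the consuming side, the ConsumerConfig dump above (group.id = noapp, auto.offset.reset = latest, String deserializers, SASL_PLAINTEXT/PLAIN) is what NotificationConsumer polls SDC-DIST-NOTIF-TOPIC with. A hedged, self-contained equivalent; again the JAAS credentials are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class NotificationConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Mirrors the ConsumerConfig dump above.
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43219");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "noapp");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put("security.protocol", "SASL_PLAINTEXT");
            props.put("sasl.mechanism", "PLAIN");
            props.put("sasl.jaas.config",
                    "org.apache.kafka.common.security.plain.PlainLoginModule required "
                    + "username=\"admin\" password=\"admin-secret\";");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("SDC-DIST-NOTIF-TOPIC"));
                // One poll cycle; the real NotificationConsumer polls on a scheduled executor.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("notification: " + record.value());
                }
            }
        }
    }

The UNKNOWN_TOPIC_OR_PARTITION warnings around this point are expected on a fresh broker: the topic does not exist until the first producer write auto-creates it, after which the group rebalances and picks up SDC-DIST-NOTIF-TOPIC-0.
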
19:07:55.835 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 3.3.1 19:07:55.835 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: e23c59d00e687ff5 19:07:55.835 [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1755112075835 19:07:55.864 [kafka-producer-network-thread | producer-1] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=producer-1] Error while fetching metadata with correlation id 1 : {SDC-DIST-NOTIF-TOPIC=LEADER_NOT_AVAILABLE} 19:07:55.865 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Cluster ID: VwX1_qxURn2AP_643I1CpA 19:07:55.866 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.producer.internals.TransactionManager - [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 19:07:55.911 [pool-1-thread-1] INFO org.apache.kafka.clients.Metadata - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to F2PMS80AQkyDDz9nvZN-8Q 19:07:55.914 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Request joining group due to: cached metadata has changed from (version5: {}) at the beginning of the rebalance to (version7: {SDC-DIST-NOTIF-TOPIC=1}) 19:07:55.915 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Revoke previously assigned partitions 19:07:55.915 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] (Re-)joining group 19:07:55.923 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Successfully joined group with generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b', protocol='range'} 19:07:55.923 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Finished assignment for group at generation 2: {dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b=Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0])} 19:07:55.931 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Successfully synced group in generation Generation{generationId=2, memberId='dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6-906d8665-0899-43b1-ba49-4ff7978ee57b', protocol='range'} 19:07:55.932 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Notifying assignor about the new Assignment(partitions=[SDC-DIST-NOTIF-TOPIC-0]) 19:07:55.937 
[pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Adding newly assigned partitions: SDC-DIST-NOTIF-TOPIC-0 19:07:55.950 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Found no committed offset for partition SDC-DIST-NOTIF-TOPIC-0 19:07:55.973 [pool-1-thread-1] INFO org.apache.kafka.clients.consumer.internals.SubscriptionState - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Resetting offset for partition SDC-DIST-NOTIF-TOPIC-0 to position FetchPosition{offset=0, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[localhost:43219 (id: 1 rack: null)], epoch=0}}. 19:07:55.977 [kafka-producer-network-thread | producer-1] INFO org.apache.kafka.clients.Metadata - [Producer clientId=producer-1] Resetting the last seen epoch of partition SDC-DIST-NOTIF-TOPIC-0 to 0 since the associated topicId changed from null to F2PMS80AQkyDDz9nvZN-8Q 19:07:56.063 [main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. 19:07:56.070 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics scheduler closed 19:07:56.070 [main] INFO org.apache.kafka.common.metrics.Metrics - Closing reporter org.apache.kafka.common.metrics.JmxReporter 19:07:56.070 [main] INFO org.apache.kafka.common.metrics.Metrics - Metrics reporters closed 19:07:56.070 [main] INFO org.apache.kafka.common.utils.AppInfoParser - App info kafka.producer for producer-1 unregistered 19:07:56.073 [main] INFO org.onap.test.core.service.ClientInitializerTest - Waiting for artifacts 19:07:56.100 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:56.100 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/k8s-tca-clamp-policy-05082019.yaml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:07:56.124 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Error while fetching metadata with correlation id 4 : {SDC-DIST-STATUS-TOPIC=LEADER_NOT_AVAILABLE} 19:07:56.230 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.Metadata - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Resetting the last seen epoch of partition SDC-DIST-STATUS-TOPIC-0 to 0 since the associated topicId changed from null to mwJDZbToQ7CuY8FsoMpIHg 19:07:57.236 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:57.236 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": 
"/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vf-license-model.xml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:07:58.240 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:58.240 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/base_template.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:07:59.243 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:07:59.244 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb_cds68b6da5968e40_modules.json", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:00.248 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:00.249 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:01.251 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:01.251 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vdns.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:02.253 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:02.253 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vendor-license-model.xml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:03.255 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:03.256 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:04.258 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:04.258 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vlb.env", "status": "NOT_NOTIFIED" } to topic 
SDC-DIST-STATUS-TOPIC 19:08:05.260 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:05.260 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/resourceInstances/vlb_cds68b6da5968e40/artifacts/vpkg.env", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:06.262 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:06.262 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:07.264 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:07.264 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/", "status": "NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:08.266 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:08.266 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-template.yml", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:09.268 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - DistributionClient - sendStatus 19:08:09.268 [pool-1-thread-1] INFO org.onap.sdc.utils.NotificationSender - Trying to send status: { "distributionID": "bf3df55e-cdc6-4bf7-b3b3-0fdccab91106", "consumerID": "dcae-openapi-manager", "timestamp": 1755112074821, "artifactURL": "/sdc/v1/catalog/services/DemovlbCds/1.0/artifacts/service-DemovlbCds-csar.csar", "status": "NOT_NOTIFIED" } to topic SDC-DIST-STATUS-TOPIC 19:08:10.271 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 19:08:10.271 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Distrubuted service information 19:08:10.271 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service UUID: d2192fd5-6ba4-40d2-9078-e3642d9175ee 19:08:10.271 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service name: demoVLB_CDS 19:08:10.271 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Service resources: 19:08:10.272 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Resource: vLB_CDS 68b6da59-68e4 19:08:10.272 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - Artifacts: 19:08:10.272 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vpkg.yaml 19:08:10.273 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vlb.yaml 19:08:10.273 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: vdns.yaml 19:08:10.273 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - - Name: 
base_template.yaml 19:08:10.273 [pool-1-thread-1] INFO org.onap.test.core.service.ClientNotifyCallback - ================================================= 19:08:10.273 [pool-1-thread-1] INFO org.onap.test.core.service.ArtifactsDownloader - Downloading artifacts... 19:08:10.283 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at 
java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 19:08:10.285 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@5fa0131 19:08:10.290 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 19:08:10.291 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 19:08:10.292 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@c15e2f2 19:08:10.292 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 19:08:10.293 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at 
java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 19:08:10.294 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@d8107a5 19:08:10.294 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 19:08:10.295 [pool-1-thread-1] ERROR org.onap.sdc.http.HttpSdcClient - failed to connect to url: / org.apache.http.conn.HttpHostConnectException: Connect to localhost:30206 [localhost/127.0.0.1] failed: Connection refused (Connection refused) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:156) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376) at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.onap.sdc.http.HttpSdcClient.getRequest(HttpSdcClient.java:116) at org.onap.sdc.http.SdcConnectorClient.downloadArtifact(SdcConnectorClient.java:136) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) at 
java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658) at java.base/java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:274) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:913) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:578) at org.onap.test.core.service.ArtifactsDownloader.pullArtifacts(ArtifactsDownloader.java:56) at org.onap.test.core.service.ClientNotifyCallback.activateCallback(ClientNotifyCallback.java:65) at org.onap.sdc.impl.NotificationConsumer.run(NotificationConsumer.java:61) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: java.net.ConnectException: Connection refused (Connection refused) at java.base/java.net.PlainSocketImpl.socketConnect(Native Method) at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.base/java.net.Socket.connect(Socket.java:609) at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:75) at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142) ... 30 common frames omitted 19:08:10.295 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - status from SDC is org.onap.sdc.http.HttpSdcResponse@e38bb95 19:08:10.295 [pool-1-thread-1] ERROR org.onap.sdc.http.SdcConnectorClient - DistributionClientResultImpl [responseStatus=GENERAL_ERROR, responseMessage=failed to send request to SDC] 19:08:10.395 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 19:08:10.395 [main] INFO org.onap.test.core.service.ClientInitializer - distribution client stopped successfully 19:08:10.395 [main] INFO org.onap.test.core.service.ClientInitializer - ======================================== 19:08:10.849 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Node 1 disconnected. 
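Each of the download attempts above fails for the same reason: nothing is listening on localhost:30206, so the TCP connect is refused and Apache HttpClient wraps the java.net.ConnectException in an HttpHostConnectException. The same condition can be reproduced with a plain socket probe; the sketch below is purely illustrative (the class name, helper and timeout are assumptions, not part of the distribution client):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class SdcPortProbe {

    // Returns true only if something accepts a TCP connection within timeoutMs.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // With no listener on the port this fails immediately with
            // "Connection refused", the root cause shown in the traces above.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isReachable("localhost", 30206, 2000));
    }
}

Against the endpoint used in this run the probe returns false straight away, which matches the back-to-back refusals logged between 19:08:10.283 and 19:08:10.295.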
19:08:10.854 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Node -1 disconnected. 19:08:10.863 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Node 1 disconnected. 19:08:10.863 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Node -1 disconnected. 19:08:10.863 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Node 2147483646 disconnected. 19:08:10.864 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Group coordinator localhost:43219 (id: 2147483646 rack: null) is unavailable or invalid due to cause: coordinator unavailable. isDisconnected: true. Rediscovery will be attempted. 19:08:10.957 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Node 1 disconnected. 19:08:10.957 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 19:08:11.067 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Node 1 disconnected. 19:08:11.068 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. 19:08:11.110 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] INFO org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Node 1 disconnected. 19:08:11.111 [kafka-producer-network-thread | dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] WARN org.apache.kafka.clients.NetworkClient - [Producer clientId=dcae-openapi-manager-producer-c20933a7-103c-4278-b03e-e76ac05194eb] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.992 s - in org.onap.test.core.service.ClientInitializerTest 19:08:11.271 [kafka-coordinator-heartbeat-thread | noapp] INFO org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Node 1 disconnected. 
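The Node 1 / Node -1 disconnect messages and the "Broker may not be available" warnings come from Kafka's NetworkClient once the local broker on localhost:43219 is gone at the end of the test; the producer and consumer simply keep retrying. A minimal sketch of a consumer that would emit the same warnings while the broker is down, assuming the standard kafka-clients API; only the bootstrap address and the group id "noapp" are taken from the log, and the topic name is a hypothetical placeholder:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BrokerDownConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:43219"); // broker address seen in the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "noapp");                    // group id seen in the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("SDC-DISTR-NOTIF-TOPIC")); // hypothetical topic name
            // While no broker answers on 43219, poll() returns no records and the
            // NetworkClient logs "Node -1 disconnected" / "Broker may not be available".
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("records received: " + records.count());
        }
    }
}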
19:08:11.272 [kafka-coordinator-heartbeat-thread | noapp] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=dcae-openapi-manager-consumer-b4182fb9-df76-4e6e-a02a-5532b7cdbaa6, groupId=noapp] Connection to node 1 (localhost/127.0.0.1:43219) could not be established. Broker may not be available. [INFO] [INFO] Results: [INFO] [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-unit-test) @ sdc-distribution-ci --- [INFO] Loading execution data file /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-ut.exec [INFO] Analyzed bundle 'sdc-distribution-ci' with 9 classes [INFO] [INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ sdc-distribution-ci --- [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar [INFO] [INFO] --- maven-javadoc-plugin:3.2.0:jar (attach-javadocs) @ sdc-distribution-ci --- [INFO] No previous run data found, generating javadoc. [INFO] Loading source files for package org.onap.test.core.service... Loading source files for package org.onap.test.core.config... Loading source files for package org.onap.test.it... Constructing Javadoc information... Standard Doclet version 11.0.16 Building tree for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/DistributionClientConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsDownloader.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ArtifactsValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientInitializer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ClientNotifyCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/DistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/ValidationResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/RegisterToSdcTopicIT.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-tree.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/constant-values.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsDownloader.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientInitializer.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationResult.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ValidationMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ArtifactsValidator.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/DistributionStatusMessage.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/class-use/ClientNotifyCallback.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/DistributionClientConfig.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/class-use/ArtifactTypeEnum.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/class-use/RegisterToSdcTopicIT.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/config/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/core/service/package-use.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/org/onap/test/it/package-use.html... Building index for all the packages and classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-tree.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index-all.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses-index.html... 
Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allpackages-index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/deprecated-list.html... Building index for all classes... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/allclasses.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/index.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/overview-summary.html... Generating /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/apidocs/help-doc.html... [INFO] Building jar: /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:prepare-agent (pre-integration-test) @ sdc-distribution-ci --- [INFO] failsafeArgLine set to -javaagent:/home/jenkins/.m2/repository/org/jacoco/org.jacoco.agent/0.8.6/org.jacoco.agent-0.8.6-runtime.jar=destfile=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/code-coverage/jacoco-it.exec,excludes=**/gen/**:**/generated-sources/**:**/yang-gen/**:**/pax/** [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:integration-test (integration-tests) @ sdc-distribution-ci --- [INFO] [INFO] --- jacoco-maven-plugin:0.8.6:report (post-integration-test) @ sdc-distribution-ci --- [INFO] Skipping JaCoCo execution due to missing execution data file. [INFO] [INFO] --- maven-failsafe-plugin:3.0.0-M4:verify (integration-tests) @ sdc-distribution-ci --- [INFO] [INFO] --- maven-install-plugin:2.4:install (default-install) @ sdc-distribution-ci --- [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.jar [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/pom.xml to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT.pom [INFO] Installing /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/sdc-distribution-ci/target/client-initialization-javadoc.jar to /home/jenkins/.m2/repository/org/onap/sdc/sdc-distribution-client/sdc-distribution-ci/2.1.2-SNAPSHOT/sdc-distribution-ci-2.1.2-SNAPSHOT-javadoc.jar [INFO] ------------------------------------------------------------------------ [INFO] Reactor Summary for sdc-sdc-distribution-client 2.1.2-SNAPSHOT: [INFO] [INFO] sdc-sdc-distribution-client ........................ SUCCESS [ 10.006 s] [INFO] sdc-distribution-client ............................ SUCCESS [ 54.954 s] [INFO] sdc-distribution-ci ................................ 
SUCCESS [ 57.073 s] [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 02:02 min [INFO] Finished at: 2025-08-13T19:08:13Z [INFO] ------------------------------------------------------------------------ $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 2240 killed; [ssh-agent] Stopped. [PostBuildScript] - [INFO] Executing post build scripts. [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins7783847301632825735.sh ---> sysstat.sh [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins3880215382514574791.sh ---> package-listing.sh ++ tr '[:upper:]' '[:lower:]' ++ facter osfamily + OS_FAMILY=debian + workspace=/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise + START_PACKAGES=/tmp/packages_start.txt + END_PACKAGES=/tmp/packages_end.txt + DIFF_PACKAGES=/tmp/packages_diff.txt + PACKAGES=/tmp/packages_start.txt + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' + PACKAGES=/tmp/packages_end.txt + case "${OS_FAMILY}" in + dpkg -l + grep '^ii' + '[' -f /tmp/packages_start.txt ']' + '[' -f /tmp/packages_end.txt ']' + diff /tmp/packages_start.txt /tmp/packages_end.txt + '[' /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise ']' + mkdir -p /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/archives/ [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins3561448756761557189.sh ---> capture-instance-metadata.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RsCQ from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RsCQ/bin to PATH INFO: Running in OpenStack, capturing instance metadata [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins244018052800462984.sh provisioning config files... copy managed file [jenkins-log-archives-settings] to file:/w/workspace/sdc-sdc-distribution-client-master-integration-pairwise@tmp/config13551114800656812381tmp Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] Run condition [Regular expression match] preventing perform for step [Provide Configuration files] [EnvInject] - Injecting environment variables from a build step. [EnvInject] - Injecting as environment variables the properties content SERVER_ID=logs [EnvInject] - Variables injected successfully. 
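The maven-failsafe-plugin execution earlier in the build (integration-tests) selects test classes by its default naming pattern (class names ending in IT, among others), which is presumably why RegisterToSdcTopicIT is kept separate from the surefire unit-test run above. A bare skeleton of such a class, assuming JUnit 5; the body is a placeholder and not the project's actual test:

import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Picked up by failsafe during the integration-test phase because the
// class name ends in "IT" (the plugin's default include pattern).
class RegisterToSdcTopicIT {

    @Test
    void clientRegistersToSdcTopic() {
        // Placeholder assertion; the real test wires the distribution client
        // against local SDC and Kafka endpoints.
        assertTrue(true);
    }
}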
[sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins4670163443837869634.sh ---> create-netrc.sh [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins14424171060843693456.sh ---> python-tools-install.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RsCQ from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RsCQ/bin to PATH [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins15526187297381345183.sh ---> sudo-logs.sh Archiving 'sudo' log.. [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash /tmp/jenkins7926076547270275312.sh ---> job-cost.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RsCQ from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 lf-activate-venv(): INFO: Adding /tmp/venv-RsCQ/bin to PATH INFO: No Stack... INFO: Retrieving Pricing Info for: v3-standard-8 INFO: Archiving Costs [sdc-sdc-distribution-client-master-integration-pairwise] $ /bin/bash -l /tmp/jenkins8140599236144983300.sh ---> logs-deploy.sh Setup pyenv: system 3.8.13 3.9.13 * 3.10.6 (set by /w/workspace/sdc-sdc-distribution-client-master-integration-pairwise/.python-version) lf-activate-venv(): INFO: Reuse venv:/tmp/venv-RsCQ from file:/tmp/.os_lf_venv lf-activate-venv(): INFO: Installing: lftools lf-activate-venv(): INFO: Adding /tmp/venv-RsCQ/bin to PATH INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/sdc-sdc-distribution-client-master-integration-pairwise/1237 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt Archives upload complete. 
INFO: archiving logs to Nexus ---> uname -a: Linux prd-ubuntu1804-docker-8c-8g-37411 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux ---> lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 8 On-line CPU(s) list: 0-7 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 8 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC-Rome Processor Stepping: 0 CPU MHz: 2799.998 BogoMIPS: 5599.99 Virtualization: AMD-V Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-7 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities ---> nproc: 8 ---> df -h: Filesystem Size Used Avail Use% Mounted on udev 16G 0 16G 0% /dev tmpfs 3.2G 708K 3.2G 1% /run /dev/vda1 155G 11G 145G 8% / tmpfs 16G 0 16G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 16G 0 16G 0% /sys/fs/cgroup /dev/vda15 105M 4.4M 100M 5% /boot/efi tmpfs 3.2G 0 3.2G 0% /run/user/1001 ---> free -m: total used free shared buff/cache available Mem: 32167 867 28175 0 3124 30849 Swap: 1023 0 1023 ---> ip addr: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 link/ether fa:16:3e:04:b4:8e brd ff:ff:ff:ff:ff:ff inet 10.30.106.202/23 brd 10.30.107.255 scope global dynamic ens3 valid_lft 86086sec preferred_lft 86086sec inet6 fe80::f816:3eff:fe04:b48e/64 scope link valid_lft forever preferred_lft forever 3: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:93:b6:ee:0e brd ff:ff:ff:ff:ff:ff inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 valid_lft forever preferred_lft forever inet6 fe80::42:93ff:feb6:ee0e/64 scope link valid_lft forever preferred_lft forever ---> sar -b -r -n DEV: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-37411) 08/13/25 _x86_64_ (8 CPU) 19:04:03 LINUX RESTART (8 CPU) 19:05:02 tps rtps wtps bread/s bwrtn/s 19:06:01 82.09 6.80 75.29 135.57 10850.91 19:07:01 131.36 21.56 109.80 771.34 16313.50 19:08:01 97.72 4.50 93.22 613.10 33070.89 19:09:01 47.73 1.88 45.84 115.31 5658.26 Average: 89.76 8.69 81.06 409.99 16496.90 19:05:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 19:06:01 30017912 31643760 2921308 8.87 73156 1861044 1462528 4.30 913616 1709940 195532 19:07:01 28244040 30067796 4695180 14.25 84260 2038296 3219908 9.47 2529072 1847264 5456 19:08:01 27025552 29749044 5913668 17.95 104492 2889144 6034832 17.76 2958028 2554208 716 19:09:01 28850272 31588644 4088948 12.41 107888 2901452 1500380 4.41 1168232 2540388 10716 Average: 28534444 30762311 4404776 13.37 92449 2422484 3054412 8.99 1892237 2162950 53105 19:05:02 
IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 19:06:01 lo 1.29 1.29 0.16 0.16 0.00 0.00 0.00 0.00 19:06:01 ens3 164.72 117.10 1225.61 18.91 0.00 0.00 0.00 0.00 19:06:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 19:07:01 lo 16.26 16.26 2.17 2.17 0.00 0.00 0.00 0.00 19:07:01 ens3 1002.70 613.33 2178.06 189.26 0.00 0.00 0.00 0.00 19:07:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 19:08:01 lo 10.31 10.31 1.43 1.43 0.00 0.00 0.00 0.00 19:08:01 vethaacacc0 0.15 0.43 0.01 0.03 0.00 0.00 0.00 0.00 19:08:01 veth81f2f40 1.47 2.12 0.37 0.41 0.00 0.00 0.00 0.00 19:08:01 vethd2e2e3a 0.13 0.30 0.02 0.04 0.00 0.00 0.00 0.00 19:09:01 lo 2.27 2.27 0.24 0.24 0.00 0.00 0.00 0.00 19:09:01 ens3 2595.78 1612.63 11307.26 402.25 0.00 0.00 0.00 0.00 19:09:01 docker0 2.20 3.02 0.40 0.56 0.00 0.00 0.00 0.00 Average: lo 7.56 7.56 1.00 1.00 0.00 0.00 0.00 0.00 Average: ens3 553.93 339.54 2615.31 83.82 0.00 0.00 0.00 0.00 Average: docker0 0.55 0.76 0.10 0.14 0.00 0.00 0.00 0.00 ---> sar -P ALL: Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-37411) 08/13/25 _x86_64_ (8 CPU) 19:04:03 LINUX RESTART (8 CPU) 19:05:02 CPU %user %nice %system %iowait %steal %idle 19:06:01 all 8.86 0.00 0.57 3.34 0.03 87.20 19:06:01 0 1.22 0.00 0.20 0.12 0.00 98.46 19:06:01 1 6.22 0.00 0.36 0.02 0.02 93.39 19:06:01 2 10.43 0.00 0.58 0.76 0.02 88.21 19:06:01 3 14.17 0.00 1.22 17.32 0.03 67.25 19:06:01 4 18.82 0.00 0.83 6.73 0.07 73.54 19:06:01 5 7.71 0.00 0.61 1.12 0.03 90.53 19:06:01 6 9.70 0.00 0.58 0.70 0.02 89.01 19:06:01 7 2.63 0.00 0.15 0.02 0.00 97.20 19:07:01 all 19.79 0.00 1.65 2.07 0.07 76.43 19:07:01 0 11.55 0.00 1.07 0.74 0.07 86.57 19:07:01 1 16.38 0.00 1.73 0.74 0.07 81.09 19:07:01 2 29.80 0.00 2.09 1.49 0.08 66.53 19:07:01 3 13.30 0.00 1.47 4.56 0.07 80.60 19:07:01 4 25.57 0.00 1.23 1.80 0.08 71.32 19:07:01 5 17.04 0.00 1.88 6.23 0.05 74.80 19:07:01 6 19.98 0.00 1.07 0.13 0.05 78.77 19:07:01 7 24.70 0.00 2.65 0.89 0.08 71.68 19:08:01 all 16.60 0.00 2.77 3.26 0.08 77.30 19:08:01 0 18.66 0.00 2.77 0.49 0.05 78.03 19:08:01 1 16.74 0.00 2.59 0.30 0.08 80.29 19:08:01 2 16.81 0.00 2.85 0.20 0.10 80.04 19:08:01 3 16.35 0.00 3.62 7.84 0.08 72.11 19:08:01 4 13.97 0.00 2.11 1.29 0.07 82.56 19:08:01 5 11.73 0.00 2.73 14.13 0.10 71.30 19:08:01 6 19.07 0.00 2.90 1.46 0.10 76.46 19:08:01 7 19.44 0.00 2.57 0.32 0.08 77.58 19:09:01 all 9.99 0.00 0.75 1.34 0.04 87.89 19:09:01 0 4.00 0.00 0.48 0.10 0.03 95.38 19:09:01 1 2.74 0.00 0.40 0.07 0.03 96.76 19:09:01 2 31.58 0.00 0.99 1.35 0.05 66.03 19:09:01 3 13.34 0.00 1.10 1.59 0.03 83.94 19:09:01 4 5.01 0.00 0.80 6.17 0.03 87.99 19:09:01 5 16.26 0.00 1.00 1.40 0.03 81.30 19:09:01 6 1.88 0.00 0.55 0.02 0.02 97.53 19:09:01 7 5.08 0.00 0.73 0.03 0.02 94.13 Average: all 13.82 0.00 1.44 2.50 0.05 82.20 Average: 0 8.87 0.00 1.13 0.36 0.04 89.59 Average: 1 10.52 0.00 1.27 0.28 0.05 87.88 Average: 2 22.21 0.00 1.63 0.95 0.06 75.15 Average: 3 14.29 0.00 1.85 7.79 0.05 76.02 Average: 4 15.82 0.00 1.24 3.99 0.06 78.88 Average: 5 13.21 0.00 1.56 5.73 0.05 79.45 Average: 6 12.66 0.00 1.28 0.58 0.05 85.45 Average: 7 12.97 0.00 1.53 0.32 0.05 85.14